What AI Bubble? Top Trends in Tech and Jobs in 2026
88 min
Dec 22, 2025

Summary
Geoff Nielson and Jeremy Roberts debate whether AI represents a genuine market bubble or early-stage transformative technology, examining job displacement risks, comparing AI hype to historical tech cycles, and discussing practical use cases versus speculative investment.
Insights
- AI market valuations remain reasonable by historical PE ratio standards compared to dot-com and 2008 crises, suggesting bubble concerns may be overstated
- Most AI implementations fail to deliver promised value because companies lack clear use cases and treat AI as a solution looking for problems rather than solving specific business needs
- Job displacement from AI is less about technology capability and more about corporate cost-cutting cycles; companies are using AI as cover for post-pandemic over-hiring corrections
- Consumer sentiment toward AI is neutral-to-negative despite massive investment, indicating a gap between investor excitement and actual user demand
- Self-driving cars represent one of the few proven AI use cases with genuine consumer adoption, though weather and geographic limitations remain significant hurdles
Trends
- AI 2.0 shift: moving from an 'add AI everywhere' mentality to focused, proven use cases with measurable ROI
- Consolidation in the LLM market despite competition; OpenAI, Google, and Microsoft dominating despite claims of democratization
- Diminishing returns on model training; exponential cost increases for marginal capability gains suggesting a plateau in LLM development
- Corporate bloat reduction disguised as AI transformation; workforce optimization framed as technological necessity rather than cost-cutting
- Regulatory pressure on big tech consolidation in the AI space, with antitrust concerns limiting acquisition strategies
- Shift from consumer-facing AI products to enterprise software licensing models (Microsoft Copilot pricing strategy)
- Backlash against AI terminology itself; brands moving away from 'AI' branding due to negative consumer perception
- Emergence of smaller, specialized models (Meta's Llama) as cost-effective alternatives to massive foundation models
- Agentic AI positioned as the next frontier but remaining largely unproven, with significant technical and ethical challenges
- Societal risk concentration: AI job displacement affecting high-wage professionals (radiologists, translators) gaining political attention
Topics
- AI Market Bubble Assessment and Valuation
- LLM Plateau and Diminishing Returns on Compute
- AI Implementation Failure Rates and ROI Challenges
- Job Displacement and Workforce Transition Planning
- Agentic AI Development and Unproven Use Cases
- Self-Driving Car Adoption and Geographic Limitations
- Enterprise AI Vendor Lock-in and Pricing Strategies
- Consumer Sentiment and AI Branding Challenges
- Blockchain and Crypto as Failed Enterprise Technology
- Metaverse and Mixed Reality Market Failure
- Apple's AI Strategy and Leadership Succession
- Regulatory Consolidation in Big Tech
- AI vs. Faster Horses: Incremental vs. Transformative Innovation
- UBI and Social Safety Nets for Displaced Workers
- Folding Phone Technology and Hardware Innovation
Companies
Microsoft
Criticized for using AI as pricing justification (Copilot uplift); example of vendor price increases without proven v...
OpenAI
ChatGPT creator; discussed as first-mover in LLM market with proven consumer adoption and killer app status
Google
Competing in LLM space with Gemini; facing pressure from OpenAI; discussed regarding search consolidation parallels
NVIDIA
GPU infrastructure provider; valued at ~$5 trillion; discussed as beneficiary of AI gold rush mentality and compute a...
Meta
Rebranded to focus on metaverse; massive AI investment; discussed as example of failed technology bet and AI pivot
Apple
Abandoned car project despite billions invested; discussed regarding AI strategy uncertainty and leadership successio...
Tesla
Full Self-Driving product discussed as overhyped; mentioned alongside Palantir as overvalued stocks with high PE ratios
Waymo
Alphabet's self-driving subsidiary; cited as first company proving autonomous vehicle viability at scale in Austin an...
Palantir
Mentioned as overvalued stock with astronomical PE ratios; excluded from balanced portfolio recommendations
Anthropic
Claude LLM creator; discussed as competitor in crowded LLM market with minimal differentiation from alternatives
DeepSeek
Chinese AI company; cited as example of achieving competitive results with substantially less compute than Western mo...
IBM
Partnered with Maersk on failed TradeLens blockchain platform; example of enterprise blockchain failure
Maersk
Shipping company; invested in TradeLens blockchain with IBM; example of blockchain hype without practical value
Australian Securities Exchange
Announced then abandoned blockchain-based trading platform; cited as failed enterprise blockchain implementation
Uber
Integrating Waymo autonomous vehicles into ride-hailing platform; discussed regarding driver displacement and wage pr...
Amazon
Last-mile delivery workforce at risk from automation; discussed regarding societal impact of job displacement
Samsung
Released thin folding phone; discussed as hardware innovation competitor to anticipated Apple foldable iPhone
Perplexity
AI search startup; attempted to acquire Chrome from Google; discussed regarding regulatory concerns and consolidation
InfoTech Research Group
B2B research and advisory firm; episode sponsor providing IT strategy and vendor negotiation support
People
Geoff Nielson
Host of Digital Disruption podcast; skeptical of AI bubble narrative; argues for realistic assessment of technology m...
Jeremy Roberts
Co-host; more bullish on AI inevitability; advocates for self-driving cars and long-term technology adoption curves
Sam Altman
OpenAI CEO; mentioned as former Y Combinator president who helped OpenAI achieve first-mover advantage in LLM market
Sundar Pichai
Google CEO; mentioned regarding AI CEO capability claims and regulatory scrutiny of big tech consolidation
Tim Cook
Apple CEO; discussed as supply chain expert whose successor will determine Apple's AI and product strategy direction
Satya Nadella
Microsoft CEO; cloud division background; discussed as example of cloud-focused leader driving company transformation
Andy Jassy
Amazon CEO; AWS background; cited as example of cloud executive successfully leading major tech company
Elon Musk
Tesla CEO; discussed regarding repeated claims that full self-driving was 'one year away' for many years
Ed Zitron
Podcast guest; discussed corporate AI adoption and how companies frame workforce reductions as AI-driven transformation
David Graeber
Author of 'Bullshit Jobs'; referenced regarding corporate bloat and roles that may not drive organizational value
Marvin Minsky
AI pioneer; mentioned regarding history of AI research dating back to 1950s and AI summer cycles
Quotes
"I'll just add AI into the tech. And it's like, why is there AI in my Instagram? I don't know. They're gonna put them in Reese's Cups too soon. It's gonna be peanut butter, chocolate, and AI."
Geoff Nielson•Opening
"Do I think that there's too much hype around AI right now and the capabilities that AI has and what it's going to be able to deliver in 2026? 100%."
Jeremy Roberts•Early discussion
"It's like building the railroads before we've invented the steam locomotive."
Jeremy Roberts•Infrastructure discussion
"The market, the market, you know, is crazier than ever. And as we see more sort of retail investors or like Reddit, like the meme investors, the GameStoppers of the world, it's fundamentally changed the dynamics of the market."
Jeremy Roberts•Market dynamics
"I think you should be asking, will I still have a job at the end of 2026 because of where we are in the economic cycle? That's a real concern for a lot of people."
Geoff Nielson•Jobs discussion
"LLMs, I think we're starting to see a plateau there. Agentic seems to be the wagon that everyone is hitching their horse to. And I just don't see that panning out in the next 18 months."
Jeremy Roberts•AI capabilities
"What is the use case here? How do you square that with the proliferation of LLMs in everyday life? Like everybody ChatGPTs everything."
Geoff Nielson•Consumer adoption
Full Transcript
The idea this year and in the past 18 months has been, I'll just add AI into the tech. And it's like, why is there AI in my Instagram? I don't know. They're gonna put them in Reese's Cups too soon. It's gonna be peanut butter, chocolate, and AI. I will buy those. There'll be a subscription. It'll be $1,000 a month. Hey everyone, we are looking ahead today at 2026. Still nobody knows what the hell's going on or what's gonna happen. And so I'm here with Jeremy Roberts to take a lens at AI, how it's impacting the economy, our work, our life. And Jeremy, what should we talk about today? I mean, I think you hit it right there, Geoff. It's really, I think, all about the impact that AI is gonna have on the world around us, specifically the economy. So the big question I've been asking, that everyone's been asking, is are we in a bubble, do you think, fueled by AI? So that's an interesting question to me. And it's one that I feel like you have to tease apart to get a real answer at. Do I think that there's too much hype around AI right now and the capabilities that AI has and what it's going to be able to deliver in 2026? 100%. I think that... No, I disagree. I disagree super hard about that. But keep going and I'll tell you why you're wrong in a sec. All right, sounds great. Look, I think that we're in a really unique situation right now where we've got a whole ecosystem of tech companies and the megaphones around them that are deliberately hawking this and want people to believe that it can do these incredible things. And don't get me wrong, it can do incredible things, but the degree to which we're seeing executives banging the drum of AI and the degree to which the market and investors are rewarding them for it and eating it up, I just at some point, this has to have a crash course with reality.
And I don't exactly know what's gonna happen to the market and to be honest, I think it's a fool's game to try and time any of this, but I think the market day by day is struggling to stay aligned with reality. Well, I mean, I wouldn't give any market advice to anyone. I'm still holding Nortel and BlackBerry, but when you think about sort of this compared to maybe other historical bubbles, right? I mean, if we go back in time, we could look at, say, the subprime mortgage crisis. That was 2008, 2009, probably the most impactful recession in most people's lives thus far. We can look at the dot-com boom. And in both of those cases, the market was way more overvalued from a PE ratio perspective than it is now, right? So this idea that even in the scheme of things, we're in some sort of a bubble, we're approaching that territory, I mean, it doesn't actually stand up to deeper scrutiny. And you said earlier that the hype is significant for a lot of these tools. I mean, what if I were to just say that we're early? Copilot might not be the best thing ever, but like, was the PC the best thing ever when the Apple II came out in 1978? I think that's broadly right, but I wanna dig in a little bit more to the price-earnings ratio piece that you're talking about, because I think you're right. And I think that is the single best argument right now for us not being in a bubble, is that if you look at these ratios, even for a lot of the top companies, they're not astronomical, they're pretty sane. There are some, you know, Palantir and Tesla excluded and like, I'm not holding money with them right now for those reasons, but if you hold any ETFs, Geoff, you probably are. Well, fair enough. But to me, like the secret sauce behind that is that the earnings that we're seeing right now are still a result of this gold rush, that people are dumping investment into these areas with the promise of some nebulous return at some future point, right?
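The price-earnings comparison Jeremy leans on here is simple arithmetic: a P/E ratio is just price divided by annual earnings per share, and bubble-era markets historically traded at much higher multiples than the current one. As a minimal sketch of that check, with made-up placeholder numbers rather than the actual figures discussed in the episode:

```python
# Back-of-envelope P/E check, as discussed above.
# All numbers below are illustrative placeholders, NOT real market data.

def pe_ratio(price_per_share: float, earnings_per_share: float) -> float:
    """Trailing P/E: what investors pay per dollar of annual earnings."""
    if earnings_per_share <= 0:
        raise ValueError("P/E is undefined for zero or negative earnings")
    return price_per_share / earnings_per_share

# Hypothetical index snapshots for comparison (placeholder values):
snapshots = {
    "hypothetical dot-com era index": pe_ratio(1400.0, 30.0),
    "hypothetical current AI-era index": pe_ratio(550.0, 22.0),
}

for label, pe in snapshots.items():
    print(f"{label}: P/E = {pe:.1f}")
```

The point of the comparison is only relative: the higher the multiple, the more future earnings growth is being assumed in today's price.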
So you've got big tech companies and Microsoft is a great example, by the way, you mentioned Copilot. And if people stop seeing the return on this, then I think there's really an opportunity for panic in the markets. And let me just put a little bit more nuance on that. I think the risk of people not seeing the returns they expect and a shock downwards is a lot more realistic in 2026 than, oh, it's even better than we thought and the stock shoots up, right? Like I feel like that's already being priced in right now. Sure. So that's my concern. So if Nvidia is at like, you know, five trillion, I mean, who knows what it'll be at when people are watching this. But, you know, it's an incredibly valuable company, but that's because people are actually paying money for it. For their products, I mean. So your argument is essentially that once this gold rush mentality fades away, I'm gonna need fewer GPUs and the NVIDIAs might retreat and that it's really hard to argue that they're underhyped. Maybe it's either appropriate level of hype or an overhype sort of situation. Yeah, I mean, it's like, you'll hear a lot of people compare this like NVIDIA infrastructure boom to laying the railroad infrastructure, you know, 150 years ago. And I think that's right, except that like, a railroad is a physical asset that you can drive trains on. And the nature of this is we're building an awful lot of infrastructure with the promise that we're gonna be able to do something really useful with it. I think it's likely that we're gonna be able to do something really useful with it. But a lot of the use cases are still semi-proven at this point, I'll say. You know, certainly LLMs, I'm worried that they're starting to plateau in terms of capabilities. And, you know, agentic to me is a great hype word, but it's still very much unproven. So there's just an awful lot riding on those panning out, like it's like building the railroads before we've invented, you know, the steam locomotive.
So how much more railroad are we going to keep building? Yeah, and the railroad metaphor is a good one. Because if you go back in time and you look at historical railroad maps, like there were a lot more railroads in North America historically than there are now. And we tore them all out because they weren't effective, but they were overbuilt, having a bunch of different private companies compete to build railroads, it turned out was actually not the most efficient way to deliver that service to a population. So you might be onto something. The other thing that I'll suggest sort of from a bubble perspective is that it's actually quite hard to be inside a bubble and sort of call it out effectively. And if everybody inside the bubble is saying, we're in a bubble, but a bubble is a speculative thing, it's actually kind of hard for it to turn out to be a bubble because nobody is really being misled. And so that's the other thing that gets me is that everybody's out there being like, yeah, we might be overvalued. I don't know that this has historically been the same. Like people weren't yelling about the subprime housing crisis in 2006, 2007, it was so rare that they made a movie about it, called The Big Short, like the one guy who actually did call it, who has called 27 of the last three recessions, by the way. So that's sort of where I'm coming from on that. Well, and it's really interesting because if we were having this conversation six months ago, and I was saying, there's a bubble, there's a bubble, people would say, you're crazy, like you're this voice of dissent, right? And I feel like somewhere in the last handful of months, we've turned a corner where it's become very in vogue to say there is a bubble, right? And like to say there's no bubble has actually become counterculture, which is interesting from like a narrative perspective, but we're still not seeing people pull their money out. Exactly. Right? And so that's fascinating to me.
But as I said, like to me, this comes down to, there's a lot more risk of underperformance than overperformance. And that's the lens I'm looking at the market through next year. If you work in IT, InfoTech Research Group is a name you need to know. No matter what your needs are, InfoTech has you covered. AI strategy, covered. Disaster recovery, covered. Vendor negotiation, covered. InfoTech supports you with the best practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe. I guess, you know, as an admonition to all of our viewers, right? This idea that the market is inherently tethered to reality, I think we can dispense with that. Oh yeah, no, the market, the market, you know, is crazier than ever. And as we see more sort of, like, I don't know if you want to call them like retail investors or like Reddit, like the meme investors, the GameStoppers of the world, it's fundamentally changed the dynamics of the market in a really big way that I think untethers it more, if that's even an appropriate metaphor, from reality. And what's a bit concerning to me is that a lot of the areas where we've seen big gains this year seem to follow that pattern, right? So whether you're looking at some specific, like, meme stocks or some of the stocks like Tesla and Palantir, whether you're looking at, you know, crypto is a good example. The other thing that gave me pause is I was reading an investigation about gold and it was saying that if you look at why gold has increased so much this year and what's different from before, a lot of it is speculative kind of meme stock investing, which is terrifying because it's meant to be, you know, kind of a hedge against everything else going on in the market and yet we've broken it because it's the same, you know, meme investors investing in it. So I agree with you on that point.
So I think that we all got to put our money in whatever meme coin is going to come out. We can call it hedge coin or something like that. We should have our own meme coin. Maybe that's our 2026. And then digital disruption token. And there's definitely an option there. OK, so let's talk about maybe the broader implications of AI just beyond market pricing. Because to me, another big one, I know you've talked about this on the podcast a ton in the past year, is jobs, right? So is AI going to take my job? Will Copilot replace me? Will ChatGPT take the job of every translator, historian, accountant out there in the world today? So at a high level, what's your take on that based on all the conversations you've had? Yeah, so I mean, I'm very cynical by nature. And I think we're seeing a few different things happen at the same time right now. I think there's a lot of talk of AI taking jobs. And I think that's extremely convenient for the companies out there that may have overhired in kind of the immediate post-pandemic era where inflation rates were low. There was a lot of kind of fast, almost surprising growth, quantitative easing. Yeah, and they're looking at, OK, well, how do I undo some of that hiring that was a little bit cowboy-ish at the time? And how do I do it in a way that still is good for PR, right? Or makes me look ahead of the curve, right? I think there's a lot of self-interest. And it's funny, and I was talking to Ed Zitron a handful of weeks ago about this. But the idea that if you actually read a lot of these press releases, if you dig past the headline, it's not we're downsizing this many people and replacing them with AI. It's we're downsizing this many people and we're investigating what capabilities AI has that can help. We're very excited. We're very excited about the prospect of AI, right? And so it's a bit of a tough pill to swallow. And for a lot of people, I don't know that it matters that much.
But if you're asking, I don't know that you should be asking, will AI take my job? I think you should be asking, will I still have a job at the end of 2026 because of where we are in the economic cycle? That's a real concern for a lot of people. And, you know, they may be pissed off more so that it's being AI-washed, that it really has nothing to do with AI. And it's completely untethered from whether this AI technology pans out or not. Now, the implication there, though, of course, is once we get past this part of the cycle, do we go back to business as usual and reach a point where, OK, we're back on an upswing and crap, we overfired and now we need to hire a bunch more people, or is there a new market equilibrium? Yeah, because if there's a new market equilibrium because of AI, then a lot of these jobs are not coming back. And that's a really scary proposition. I think, frankly, as a society, like whether it's for individuals, whether it's for, you know, clusters of people or households, whether it's for us as a society, it's a really real proposition and we don't know what all those people are going to do about it. Now, a couple of things I want to say. Number one is you can probably tell by my outlook, I don't think we're going to see a cyclical upswing in hiring in 2026. That would really surprise me. I think we're going to see the number continue to go down and probably stay down, probably stabilize at some point or come close to stabilizing, unless the market goes completely to hell, in which case we may see. But to be honest, that's one of the best defenses I think we have right now against a true crash, which is to your point about earnings. Companies seem to be concerned about the bubble bursting? Yes. And they're preemptively, you know, kind of leaning out their operations, to use a euphemism for firing people. And so they're somewhat protected from that.
So if we look beyond that, though, I think we will start to see, regardless of whether AI, you know, is completely transformational or even incrementally better than it is today, I think we are going to start to see a change in the composition of our workforce in a lot of organizations. I think, you know, and maybe it's a hot take and maybe it'll rub some people the wrong way, I think we have seen a huge explosion of corporate roles in the last two years, five years, 10 years, 15 years, like functions like IT, HR, ops, these roles that are not customer facing and in many cases are not directly increasing the capabilities of the organization and like driving their competitiveness forward. We've just seen a huge proliferation of that. And it's tough, I think, for any investor or any business leader to be really excited about that. Like, hooray, our company's gotten so big and cumbersome. Yeah. And so if I think about the future shape of organizations, I think looking at roles that are more customer facing, more looking at capability building and more just kind of nimble around what needs to be done versus I have a job and it's task based. And I do this task on Monday and this task on Tuesday and this task on Wednesday and like rinse and repeat. I think that's the way we're going to start to see some of this evolve. So that does sound a lot like a talking point, though, right? Like my organization is so bloated and there's all these people who don't do anything. But like those people were hired for a particular reason. Sometimes it's to grow the fief of their boss. Fair enough. We've all read the David Graeber book, Bullshit Jobs. If you haven't, you should. It's very interesting. But in a lot of cases, those people are performing functions that when they are removed, they do dramatically reduce the efficiency of the organization. So I think it's easy to say that there's a lot of bloat.
But I do think that those roles are maybe more complicated than they seem, which is part of the reason that AI implementations have been so tough. That said, I agree in principle with the idea that artificial intelligence is likely to impact those back-end roles. But you remember that MIT NANDA study that said that 95% of AI projects don't produce any real value. I think that's because they're picking the wrong things to focus on. So like if I take, you know, a person who's sort of acting as human middleware and swap them out for an artificial intelligence solution that does the same thing pretty effectively, I mean, I'm not really moving the needle. So I think that is broadly in agreement with the point that you're making. But my question for you is, if that's not valuable, why do you think companies are doing it to the degree that they are? That seems to be the primary use case. And like, do you see a way forward where they can really derive value from those solutions in a jobs context? Sorry, you mean you're referring to the AI adoption? Yes. Yeah. I want to park the AI productivity gains and like the value piece for a second, because I want to come back to something you said earlier about how all these roles must be increasing the efficiency of the organization, because I just so strongly disagree with that. And I think that that rests on a belief that an organization works as this kind of organism where everyone understands what's best for the organization and you can transmit with near perfect efficiency what the goals are. And there's some sort of global optimization there. And I think the bigger the company gets, the more difficult it is to do that. And I think there's just like with humans and anything they do, there's this broken telephone effect and you end up with a lot of roles that are opportunistic. And I mean, you kind of glossed over it as like, fiefdoms.
But I think there's a lot of what makes sense at this price point or with this level of growth. And there's a lot of excitement of, ooh, I can finally do this. But this is all, you know, to me, and it's a silly analogy, but employees are an operating expense, not a capital expense. Right? It's not, oh, I bought myself a new jacket. It's I signed myself up for a new streaming service that I'm paying for every year. And if it turns out there's not a lot of content on there, at some point, you know, you're going to want to cut the cord on that. And like I want to acknowledge that I'm being extremely facetious about those livelihoods. But I think that organizations do have a responsibility for making sure that the people in the organization are working on things that are actually valuable to the broader mission of the organization. And, you know, there's a philosophical question that makes me really, really uneasy in all of this, which is how many employees should an organization have? And I don't really know how to answer that question. And I'm worried the answer is as few as humanly possible. And that is a problem for society, especially in an age of increased automation. But I'll, you know, kind of jump back to your question, which is about what's going on with AI and AI not creating value in organizations. And I mean, I think the reality here is twofold. First of all, I think organizations have just broadly done a fairly poor job of implementing AI in a thoughtful way over the past 18 months. Sure. I think there's been this belief at an executive level that AI is easy. Oh, wow. It can do all these things now. So let's just plug it in wherever we can. You know, AI, AI here, AI for everybody. Right. You've gone full Yosemite Sam. Yeah, thank you. I like my cowboy-ish finger guns. And I think that's been radically disproven over the last handful of months because we've seen the rate at which these initiatives are failing.
And so I think the era of let's just bolt AI onto everything is coming to an end. I think that's a really good thing. And I think it's forcing organizations to say, oh, shit, this is harder and more expensive than we thought. And by the way, probably involves those pesky IT people. And, you know, I have to laugh because like every IT project in history has been late and over budget and is extremely difficult and more complex than people think. And suddenly AI is like easy. Like, who saw that coming? Like, yeah, obviously it's harder than people think. And so I think we're going to see sort of AI take two, if I can call it that, in 2026, where people now need to decide, OK, what do I want to be serious about here? And what am I really willing to properly invest in? But that's still happening at a time where AI is at a very overly experimental state, right? Like, I think we can all count on one hand the proven use cases for AI in terms of, oh, yeah, it's good for customer service. Developers can get more productivity, translation, you know, things like that. And for everything else, it's still sort of question mark. And, you know, the reality of question mark is you have to suss it out. And that means that you're going to fail a lot of the time. I spent years running an innovation function. And, you know, one of the things I feel good about is we failed fairly frequently, right? And, you know, we would try to fail fast and we had lots of successes, too. But that's the nature of innovation is you can't get it right all the time. It's like research. If somebody's telling you everything they research is 100 percent known, that's not research. And so I think organizations need to have an appetite for that if they're moving beyond any of the bread and butter use cases of AI.
So those bread and butter use cases, though, usually run the gamut of, well, I shouldn't say run the gamut, but they sit on a spectrum somewhere that's usually, it's pitchable, right? So the company that's buying sort of understands the problem to be solved. So it's not really boundary pushing. It's possible for the buyer, usually a chief information officer, a chief technology officer, to justify it from like a cost perspective. So, you know, we're going to reduce the amount of time it takes to do expense reporting by introducing efficiency into the expense reporting process, because AI is going to read all the receipts. That's a real thing, by the way, and it's very handy. And once it goes through that sausage making machine, so to speak, what comes out the other end is usually not fundamentally transformative, to use the not-real Henry Ford quote. We're getting a lot of faster horses here, I think. So like, how do we go from the faster horse to inventing the automobile? And like, is AI part of that story or is AI just a way to solve the problem of wages in these jobs that maybe shouldn't exist in the first place? Well, I think there's one more element that we need to, you know, I want to get serious about, which is that there's an awful lot of vendors and, you know, I'll dunk on Microsoft for a minute, that are using AI as an excuse to just raise their prices and collect more revenue, right, where they say, hey, Mr. and Mrs. Buyer, we have all this new AI functionality. And by the way, that's justifying why we're charging you 20 percent more next year. The Copilot uplift. Yeah, which I mean, has been going really well so far. We'll see how long that lasts. Like it's been going really well so far because buyers don't know any better or they're still, you know, a little bit naive about this. It hasn't been going as well as they want it to. No, it hasn't.
And I think it's about to go a lot worse, by the way, because I think people are starting to look under the hood and saying, hey, this actually isn't doing something I want or it's not doing what you promised. So that, I think, is a huge motivation for what we're seeing here. But the question you have about where is AI a faster horse versus where is it a car in an era of horses is like the fundamental question of our time in AI, because everyone is advertising cars and everyone is selling, you know, faster horses and everyone is buying faster horses. Like people show up and you might pitch them the car and they'll be like, I don't know if I can park that in my stable, you know, maybe I should get one of them faster horses. Like, it's not just the vendors. No, no, and by the way, it's happening internal to organizations where investors are demanding cars and CEOs and boards are saying, we build cars here, you should know that. The value of our company is that we build cars. And then they go back to, you know, their IT team and say like, we're building cars, right? And they're saying, it's just horses back here. And like that, that reality. You got car money. Yeah. Yeah. So that's a conflict and there's some really, really perverse incentives at play right now. Yeah. And everybody is kind of looking to everybody else to say like, but surely you have a car over there or like, do our competitors have cars? Do our vendors have cars? Yeah. And there's just not a lot of cars right now. And, to mix metaphors from earlier, we're sure building a lot of like asphalt roads right now as though there's going to be cars. And so that's going to be really interesting. And by the way, I do think, you know, to come back to something I said earlier, I really think that LLMs are not going to be the road. I think LLMs are the faster horse right now. Yeah.
And they can do some really cool stuff and they can augment individual productivity, but they are not the sea change that they're being pitched as. We may see some cars. This metaphor has, like, totally gotten away from us. It's totally busted at this point. The horse has left the barn. Oh my God. Like, I don't want to slap you, but I sure do want to get an AI to do it, digitally. Yeah. So I think we're going to see specific areas outside of LLMs. Yep. That will make us say, wow, in 2026. Like, oh, wow, I didn't know that AI could do that. But even AI, like, we've got to be honest, is a marketing buzzword. Yeah. Right? Like, what the hell does AI mean? And by the way, we're complicit in this. Like, I am part of the problem here. But, you know, to get to a bit of a meatier conversation. Yeah. LLMs, I think we're starting to see a plateau there. Agentic seems to be the wagon that everyone is hitching their horse to. Yeah. God, I've broken this analogy. And I just don't see that panning out in the next 18 months, to be honest. It's just too complex to get beyond the basics. Will there be more automation happening in organizations? Yes. Will we see some exciting new use cases? Yes. But the fact that, you know, I've spent this year talking to some of the brightest minds in academia, in journalism, on the front lines, actually developing this stuff, and anytime you're like, tell me about the use case you're most excited about, and they kind of, you know, politic their way out of the question, that is extremely telling to me. Yeah. Well, you mentioned earlier, right, sort of AI 2.0. And that feels a lot like Web 2.0 to me, right? So, like, Web 1.0 was very non-interactive. Like, here's a page, here's a directory. Remember we used to browse the internet with an alphabetical list of everything that was out there? Like, are you interested in aardvarks? You know, that was the top left.
And then Web 2.0, of course, became far more interactive. There were buttons and things that you could press. So if AI 1.0 is LLMs that do really gimmicky stuff, like write a poem for your birthday, my grandma's cards got much more interesting after ChatGPT came out, by the way, what do you think AI 2.0 actually is? So is that analytics? Is that uses in, like, health care? You talked about agentic a lot. An agentic AI is, of course, an AI that does something for you based on things that it knows about you. So it actually takes sort of independent actions. It's your agent; you're the principal. What do you see 2.0 as looking like? So let's talk for a minute about 1.0. And I love the web comparison for AI, by the way. And I don't know if you're old enough to remember, but when the web first came out, it was very much like, just add the web to everything. Like, the example that comes to mind, that people used to make fun of, is that in the mid-90s, Pizza Hut had a website. And it was basically just like, what's your local Pizza Hut? Okay, here's the phone number for how to call them. Give them a call. Yeah. And you're like, wow, the information superhighway in action. And I think that's very much where we're at with AI, where it's like, oh, everybody, just get your AI up and running. And by the way, I think one of the challenges we're seeing right now is there's a lot of investor excitement about AI. And I think there's a lot of consumer... backlash is maybe slightly too strong a word. But I think when a lot of consumers hear the word AI, they have a neutral to negative reaction. And it's been shocking to me how much there's been, like, you can have an AI PC, or, like, Google now with AI, like, your phone is full of AI. And I don't know if nobody gives a shit or they actively don't like it. But it's not having the impact on consumers that it's having on investors. It's neutral at best. At very best. I think it's actually negative.
And people are like, what is the use case here? How do you square that with the proliferation of LLMs in everyday life? Like, everybody ChatGPTs everything. Like, you're going to get an email from somebody that's very obviously ChatGPT. Oh, oh, yeah. No, no, like, I get emails that end with, like, would you like me to change the above? It's just brutal. Like, seriously, I've had people literally trying to sell me something with that text in it. The way I square that is the difference between somebody having a productivity tool in their back pocket that they can use as a way to get themselves ahead versus having something else pushed on them. And one of the things that happened with the web is people stopped talking about the web as though, like, web, web, web. If you met a business owner and they said, I've got to be on the internet, you'd look at them like, oh, where have you been? No, no, exactly. And so I think what's going to happen with AI is people are going to realize, you know, companies, or builders of AI, are going to realize very quickly that these words are getting polluted. And because they have a negative connotation with consumers, you can't just be like, hey, everybody, AI! Like, why should I care about that? Why should I be excited about that? And that is the question of what can AI do for you. And I think we need to get back to that in a big way. And, like, if you can tell me, why should I buy an AI PC? Why should I have, like, your browser is AI now? Like, why do I want an AI browser? I don't know. I cringe just hearing it. And these are real examples, by the way. Of course. This is not me making that up. So how do you market this in ways where there's actually some value to people? And I think, you know, and this is a cop-out, you know, in the predictions game.
But, you know, asking somebody in 1997 to try to predict Facebook or, you know, predict Instagram, I think we're a few chess moves away from that. But what it's going to look like is a series of very specific use cases that people are excited to use. Like, ChatGPT and LLMs in the chatbot capacity are an absolute killer app. Right. Like, you just use it and you're like, this is awesome. I can get value out of this right away. Nothing else in the AI space is like that. And it's this weird extrapolation of, like, well, it works over there, so why is it not working for my business? And so that's got to die. And I think we're going to start to see people become a lot smarter with that. And I think a wave of hype is going to get washed away as part of that. So now, this is incidental to your point, but I want to jump on it because you gave me a couple of openings here. So you mentioned how the consumer sentiment around AI is quite negative. Right. When I think ChatGPT, if we did a word association game, my response would be brain rot, which is a real thing, right? People don't flex their brain muscles. The brain isn't actually a muscle, but, you know, they don't flex their brain when they use ChatGPT. Yeah, there you go. To solve their problems, right? And so they end up being less effective at problem solving, sort of in the aggregate. But then you talk about social media. And if I were to go to people and say social media, positive or negative, I bet social media has a more negative connotation these days than AI does, but it's still tremendously lucrative. They sell the crap out of ads on Facebook and Instagram and TikTok and all of these different sites, despite being assailed by regulators and by moralists, you know, by pundits, by podcast hosts. It's still just a phenomenally profitable business. Meta is raking in money, and then they're shoveling it into the metaverse furnace and the AI furnace and they're burning it.
But, like, do we actually need to have positive sentiment around AI, or can it be like a social media thing, where it becomes so useful that it just breaks through even though we all hate it? So I think that's a really interesting question. And I want to just sit for a moment with social media and the fact that everybody hates it, myself included. I imagine you hate it too. Like this video, by the way. Yeah, like and subscribe, like and subscribe. Social media is very, very oligarchical right now. Right. It's consolidated. It's consolidated within a handful of big tech firms and it's extremely lucrative for those firms. And the way that they've made it so lucrative is that they've built it completely around engagement and attention capture, right? How do I keep you on my platform as much as possible? And, you know, there are all sorts of negative societal outcomes around that, and we could spend a very long time talking about them. Understatement of the show. Well, and I don't think anyone's watching this and is like, what? Yeah. Social media having a negative impact? Controversy at Facebook? No, no. And so I think we're seeing AI go down that road. And even with some of the new releases of ChatGPT and Gemini, you can see in the way that they're structured, they're more structured around engagement, right? How can I be effusive, in terms of, like, Jeremy, that was a brilliant question, and good on you for asking it. Maybe it was a good question. Yeah. Well, with yours, I'm sure they're always amazing. But, and now that you've asked it, can I help you with the next thing? Yeah. Here's something else I can do for you. How do I keep you locked in? Well, I mean, want me to make it a table for you? You know, would you like it in poetry form? Yeah, exactly. Anything to keep you locked in.
And one of the themes that's been, you know, depending on exactly the day, more or less in the news cycle is the monetization of these LLM platforms. And, come on, we all know that we're going to get to a day where ChatGPT is going to say, like, you know, you seem tired, but wouldn't you be revitalized by drinking a delicious Red Bull? I don't drink Red Bull, by the way. I drink enough for both of us. That's great. It's coming, and it's going to be the social-mediafication of AI, of these, you know, these LLM platforms. And once again, that's why big tech is so invested in this, because they know this and they want to own the platforms. Let's talk about everybody else in the context of social media, because if you're a business, you may advertise on Instagram or LinkedIn or whatever, and you may get revenue from that, but you're certainly creating an economy where you're paying these rent-seekers in big tech big dollars to be able to do that. And so I think that's exactly what's happening. Like, if you extrapolate that, these businesses are sort of using social media, but not in a way where they're developing new functionality. It's marketing. It's an ad platform for what they're doing. It's a new way to reach their customers, and, you know, new types of content get created. But when people are adding AI into their business right now, or valuing businesses based on what AI could be added, it doesn't strike me that they're saying, like, and it'll be just like how this business uses social media. Right. Like, that's not the framing device. And I don't know if this is where you're going, but I kind of think maybe it should be more like that framing device. Yeah, it's such an interesting concept. I think you're right, by the way. You know, for anybody watching at home, a social media platform is just a really, really sophisticated ad engine.
We take the best, the brightest, the smartest mathematicians and computer scientists in the world, and what problem do we have them solve? How do we get you to watch a video for just 10 more seconds so that the advertiser can cram in some additional content? It's huge business. AI, it's going to be weird when we're in that dystopian future when ChatGPT is like, you know, the delicious taste of Pepsi will surely get you through this difficult time that you're talking to me about. And there will 100 percent be stories in the media. That's a prediction. Oh, yeah. A totally inappropriate ad that was deeply personalized was targeted at somebody. But yeah, I think the social media comparison has definitely got me thinking. And I have to wonder, you know, if consolidation in the AI space is going to be inevitable, like it was on the social media side, Facebook buying Instagram and WhatsApp, right? You know, TikTok growing pretty rapidly. And I agree with you on the innovation front. I have to wonder, though: the network effect is really what drives social media, right? I go there because other people are there. Does AI have a similar moat? Like, LLMs, I think, certainly don't. I mean, what's the difference between Claude and Gemini and ChatGPT and, you know, any of these options? Yeah. Could somebody with enough computing power just come in and dominate the game? Or is it a marketing play? Again, I think it's interesting. But, you know, the example that comes to mind is Google. Not as an AI player, but actually as a search player. Sure. Because, you know, you may recall from your youth the days of Lycos and AltaVista and Yahoo and all of those. And Google came in, and it was better, and it had a reputation for being better. And it just consolidated the market under it because it was the winner. And it was really effective because they actually did things differently. Right?
Like, not to belabor the point too much, but AltaVista and Yahoo, especially Yahoo, was a portal where you saw everything laid out in front of you. Google was a search engine. They had abstracted all of that away. Then Yahoo eventually had a search engine and everything. But yeah, sorry to cut you off there. Carry on. Well, so, you know, my point is that to me, it's still an incremental moat in that story, because they all had search engines in some capacity by, like, you know, call it the year 2000. But Google was still able to run away with a victory there. And, you know, I think we've already seen, you can look at graphs of the usage across the main LLMs now, and it's extremely consolidated. And I think it'll stay extremely consolidated. And I talk to a lot of people who are huge advocates for decentralization of AI and LLMs. And we're seeing, you know, DeepSeek was a big story this year about how you can use substantially less computing power and kind of get ahead. But people still tend to gravitate toward the known names. Right? And being an incumbent, I think, is extremely powerful there. That's one of the big narratives that a lot of these players have right now, which is, it's winner take all. And so, you know, whether it's LLMs or technology beyond that, you've got to invest in us and we're going to get it. It's functionally an arms race for, you know, AI tech. And we're going to take it all. And I'm very skeptical of that as well, which I'm sure you'll find shocking. To me, that is a tactic for raising capital, and it's extremely effective. But I think we're going to see a space similar to what we see in social media with big tech, where there are a few different niches that are kind of carved out, and people will stick mostly with the big players. And there are two points on that. Right.
So the first is that ChatGPT was an OpenAI product, and before 2022, nobody had heard of OpenAI. So they actually became the first mover in this giant, valuable market out of almost nowhere. And I say almost because they did have some big investment. And, you know, Sam Altman used to run Y Combinator, which was a startup incubator in the Bay Area. So they did actually sort of leapfrog a lot of their competitors, and it caused a panic at Google. And now it's sort of become a resource competition. So the winners of the AI race so far are the people who can burn the most money in pursuit of the goal, right? It's very hard. You know, if we wanted to make an AI startup, we would have to raise a lot of money to have the capacity. So that's one thing. But on that thing, this is, you know, one of my predictions for 2026, which I may be proven wrong on: there's so much talk about, you know, compute, and just building more capacity, because if we can only train these models better, you know, we're going to hit some sort of inflection point and we're going to race ahead, and it's going to just, you know, take our stock to the moon, the singularity. I just think it's total bullshit. I think we're reaching a point where, as I said with LLMs, we're reaching a plateau where it costs exponentially more to get marginal gains, which is the financial equivalent of just shoveling money into a furnace. And so I think the floor for entry-level participants is getting to a point where, despite all the talk about that, you're not going to build a superintelligence just by shoveling more money into training data. Right. I think we're going back to an era where creativity and asking the right questions and structuring, you know, having new ideas, like backpropagation with, you know, with Hinton a handful of years ago, those new ideas for how we come up with these things are going to lead to the next innovation.
It's not just a raw horsepower race, which is a very inconvenient message if your funding is contingent on just building more and more. Yeah. A lot of horses in this conversation. Of course. Turns out to be a wonderful metaphor, a wonderful way to structure a conversation. The other point that I wanted to make just before we jump ahead was around consolidation. There's a regulatory angle to this, right? Everybody wants to be big, but they don't want to be so big that they attract scrutiny, right? And that's something that, you know, Apple has faced with the Epic Games lawsuit. That's something that Google faced as a monopoly in search. And I believe they actually lost that case, if I'm not mistaken. And there was a concern that they were going to have to sell Chrome, and Perplexity wanted to buy it and make an AI browser, which, from what I understand, is your favorite thing. And so that didn't go through, thankfully. But Chrome will probably become an AI browser. So I think that if you are a big tech company, if you're watching this, Sundar Pichai, if you're watching this, Satya Nadella, you know, be mindful of the regulators, because we don't typically like one company dominating a space so significantly. I do think so. Maybe a prediction on my end for the future: I think you're right about diminishing returns on models. I think the DeepSeek example is a really good one. Facebook, Meta, had a product called Llama, which is a local runnable. Is that a correct way of phrasing that? A model that you can run locally. How's that? And for most people, it's very effective for most cases. Right? So you get 70% of the effectiveness at 0.5% of the necessary compute. Right? And I'm speculating on those numbers. So I think that there's a scenario where we have these smaller, specialized models that don't reinvent the wheel every time somebody asks them a question. I could definitely see a world where that type of LLM becomes cost-effective.
The barrier to entry is relatively low and the value is real and relatively high. So definitely something I want to talk about. But speaking of things that you think are bullshit: we've talked a lot about AI, but that wasn't the biggest bet for some of these companies over the past couple of years. And the thing that really comes to mind for me is the Metaverse. And that's such a big bet. Facebook didn't rename itself AIbook. They renamed themselves Meta, and they were going to inaugurate this new world of spatial computing. This was pre-2022, before ChatGPT dropped. And we just don't hear a lot about that anymore. So let's talk about the Metaverse. I know you had a podcast episode this year on the Metaverse. How do you feel about the Metaverse in 2026? I think I know the answer, but I'm legally obligated to ask the question. Yeah. So, I mean, one of the things we've found, and I talked about this with, like, the backlash against AI, a consumer backlash, is that the Metaverse has had such a strong backlash that people won't even say it anymore. They'll say XR, which covers augmented reality and virtual reality, because Metaverse is just so tarnished. I've had conversations with a few kind of leading thinkers in mixed reality this year. And, you know, a couple of things I should say. One of them is that those episodes don't perform very well on social media. People don't like to watch. Which tells me that people don't give a shit about mixed reality right now. Or maybe Meta wants you to talk about AI and is deprioritizing it. Maybe. Yeah. I mean, AI certainly has taken a lot of the air out of the room for, like, every other emerging technology out there. If there's no air in the room, how will the horse survive? Sorry, I think I've lost the plot on this. Yeah. Which horse is that? And so there's a very, you know, good chance that a horse comes from behind in this race. Dark horse. Yeah, dark horse. OK.
That's a different emerging technology that captures the popular imagination because nobody's paying attention, because they're all focused on AI, which is the name of the winning horse, of course. But I don't think that dark horse is going to be mixed reality. I don't think it's going to be the Metaverse. And coming back to a point I've made a few times already: look, I don't think anybody has answered the question of why the hell should I use mixed reality, certainly not virtual reality. Yeah. Getting people to strap on a headset. No. It's just a no-go. And there's, I don't know, this weird narrative of, well, you know, the problem is the headsets are just too heavy. And they're ugly or something. If only they were less expensive and less heavy, people would use them. Bullshit. Look, use them for what? Yeah. Use them for what? There's just nothing to do in there that's not better on the outside. Like, oh, it's like being inside your phone. Who wants to be inside their phone? I want to be farther from my phone, not deeper into it. And so, yeah, again, you know, I talked to somebody this year who was talking about the wearables, like the augmented reality glasses: my notifications are right in my glasses. In my eyes, right there. You know where I don't want my notifications? Any closer to me. Yeah. And by the way, I'm a sample of one. But there's all sorts of research right now saying that Gen Z, younger people, have recognized that we've become a bit too dependent on our phones and on social media, and they're trying to create some distance. I don't know how well they're succeeding, but they're certainly trying. And I think we're going to start to see that spread upward through generations. And I don't think you see anyone clamoring for, I wish I was more plugged in. Yeah.
And that is the entire value proposition of mixed reality: imagine you're more plugged in. Nobody wants that. So what's left? I don't know. That's my take. Well, I bet phones with e-ink displays, to deliberately make them more difficult to use, will probably outsell, like, the new Apple... Well, and they exist, right? Like the deliberate dumb phones, or your phone in black and white, whether it's an app for that or people just getting dumber phones, a piece of physical hardware. I've been advertised tons of those, because people don't want more of that right now. And it's interesting. I think it speaks to the development cycle on a lot of these things. So, like, the Vision Pro is Apple's entry into this market. And Apple is actually usually pretty good. They created two new wearable categories very subtly: 2015, the Apple Watch, and then the AirPods a year later. I mean, those two things are ubiquitous. I think the Apple Watch might be the best-selling watch in the world. AirPods, people laughed at them. They've got stalks coming out of them. Best-selling headphones in the world. If AirPods were an independent business, they'd be a Fortune 500 business, and not a small one, right? Like, they're massively popular. But they couldn't get it to work with virtual or mixed reality. Well, but hold on. The question is, why did those both become so popular? And I have a very simple answer, which is, they're status symbols. They're luxury goods that demonstrate status. Yeah, that's what they do. Like, yes, I have them. And that's why they bought Beats, because Beats is just a status symbol. Look at my big headphones. They're expensive. Everybody can see them because I wear them everywhere. The bass is turned up. And the bass is turned up and the sound is shit. And I'm not going to go on my strong opinions on Beats. I did not anticipate this. I have strong... look, I have a music background. I have strong opinions on Beats.
That's what Apple does. Apple is not a tech company. It's a luxury goods company. And that's why they keep hiring people from other luxury goods companies into their, you know, kind of marketing and product functions, because that's the code that they've cracked. You cannot just apply that logic to a headset, right? What's the luxury good there? It doesn't signal any status to be putting this on. And the worst thing they're facing, and Meta is dealing with this, and everybody who's throwing their hat in the ring is dealing with this, is the concern that if you put on these glasses, it's creepy, right? This is someone who could be filming me right now. Got a camera. How in the world does that signal status? Right? That's antisocial behavior. And so this is the moat that's uncrossable. And this, by the way, is like a microcosm of this entire force that's at play in tech right now, which is, on the one hand, you have listening for what people actually want. And on the other hand, you have the question, how do I make more money and push more stuff? And the pushing more stuff is not going super well right now. It's going well overall in the sense of, we just keep charging more money for the same thing and trying to juice engagement. But a lot of them are kind of starved for new ideas. And the idea this year, and in the past 18 months, has been, I'll just add AI into the tech. And it's like, why is there AI in my Instagram? I don't know. They're going to put it in Reese's Cups soon, too. It's going to be peanut butter, chocolate, and AI. And I will buy those. There'll be a subscription. It'll be $1,000 a month. Yeah. I love this Apple example, because we can come back to AI and talk about that. Apple has one of the most widely used artificial intelligence tools in the world, from, like, 2010 or 2011. Siri. Yeah. And I believe you had the Siri founder on the show, if I'm not mistaken. Siri does not have a great reputation, generally speaking. Right.
And so you would think that the emergence of LLMs would be a great opportunity for Apple to say, hey, this product that we have that has maybe had some issues... They'll even say at keynotes, we fixed Siri this year. Sort of acknowledging that it was broken, which is a very un-Apple thing to do, by the way, because you're right, they're very focused on their image. Why hasn't this been a big win for them? Is it because the LLM, in your opinion, isn't at that high quality level? Is it because the voice of the customer is not wanting more LLMs in products? Is it an Apple-related problem? I know this is kind of a wide-ranging question, but surely a good use case for an LLM would be a functional assistant, surely. Just to make sure I understand the question, you're asking, like, why does it seem like Apple's falling behind in the AI race? Yeah. It's really simple: because they're a luxury goods company, and AI is not critical to what they're doing, and it shouldn't be critical to what they're doing. And I honestly think, and we've talked about this before, right? Like, the fact that there's this investor bonanza of, oh, you have AI in your, you know, prospectus, like, let's skyrocket the stock. Dot-com in '99. Right? Same idea. It's stupid. And I think that we're going to see a bit of a reckoning, which is probably too aggressive a word, but I think people are going to say, oh, yeah, actually, Apple doesn't need to be an AI company. Not every company needs to be an AI company. And there isn't that much of a premium on it. All of this, I think, by the way, is pretty bullish for companies like Google, because actually, truly being an AI company is very, very helpful. But for everybody else, I don't think it's going to see the gains that investors are expecting. And if the average company is going to get these gains, most of them, I think, are going to get it by bringing in a Microsoft or a Google into their shop anyway, and get it through.
You know, we talked about this earlier: a technology vendor, probably, frankly, one that they're already using. So I think that's going to happen. It's going to happen in a lot of industries where people are saying, oh, do we still need consultants? Do we still need any professional services? You know, AI is here. I think that's a huge misrepresentation of why people buy these services to begin with. I don't think it's just, oh, what's the answer to this question? Better bring in McKinsey to answer it for me. I think it's how organizations get things done, you know, working with some of these functions. So we'll see the stock of some of these firms, which has been beaten up a little bit, rise. But, I mean, the Apple piece I want to come back to, because there's a different concern at Apple, which is, you've got Tim Cook now, you know, functionally announcing that he's an outgoing CEO. We've got some suggestions about who may be up next. But the real valuable question is, what's next for Apple? Yeah. And investors have decided the answer is AI. And I think that's dumb. And I think maybe Apple thinks that's dumb. But without being able to tell them what the real answer is, they're in trouble. And that's what we've seen depressing the stock. So if they think the problem they're solving is how do we get AI into our products, they're barking up the wrong tree. The problem that they're trying to solve is, what's next for Apple that's going to make us a pile of money? And they've been really good at being a platform, right? The App Store is one of the most innovative things, I would argue, of the 21st century so far. It certainly changed all of our lives. Many of you are probably watching this on an iPhone, in an app that was made by a third party that Apple is collecting a big cut on, right? That's sort of their whole thing. Search engine: they could have built their own. They didn't. Google paid them a lot of money for it.
It was the center of an antitrust lawsuit, right? So the reason I asked about Apple is because I think they're one of the most compelling companies in this space. They're the 800-pound gorilla that really has not stepped into the ring. And succession is going to be very interesting. Like, if you look at Amazon, who succeeded at Amazon? Andy Jassy. What was his job? Amazon Web Services. Not a retail guy, a cloud guy. Microsoft: who took over from Steve Ballmer, of nobody-will-want-an-iPhone-because-it-doesn't-have-a-keyboard fame? It was Satya Nadella. Cloud division, right? So where Apple goes... and by the way, both those companies saw massive, massive success in their cloud businesses after those folks took over; in Jassy's case, before he took over. So with Apple, it's going to be very compelling to see who they pick as a successor to Tim Cook, who is a supply chain guy and really sorted out their supply chain. So yeah, I think that has so much to do with the ultimate success of Apple Intelligence, or whatever the final iteration looks like. So the way I'm seeing it, Geoff, right now is: Web 1.0, then Web 2.0. There was a big shaking out. All the pets.coms went bankrupt, but we got a bunch of really solid Web 2.0 companies, like Google and Meta and Amazon, who were able to sort of ride through that storm, in the case of Google and Amazon, and emerge from it, in the case of Meta. On the AI side, you're predicting sort of an AI 1.0 to 2.0 shift. Yeah. Right. What about Web 3.0? You may recall a couple of years ago, everything was going to be on the blockchain. And sometimes the blockchain was going to be mixed with other stuff. So I'm going to ask you a loaded question, and that is: can we finally close the book on this? Can we finally just say that blockchain isn't a real enterprise technology and just move along? I certainly agree with that.
At the broadest scale, there are industries and there are specific functions where you can use a decentralized ledger and there's some value to it. But I think. Can you name one? I've heard examples in sort of finance and insurance internally where they keep records like this. I'm always skeptical when I hear those stories. So we'll have to look something up. I am too. And should it be uttered in the same breath as AI? No, I don't think it should. And we can decide how much we want to talk about crypto, but I think we've seen. Crypto is a whole different thing. Crypto is purely speculative. I think it is too. I think we're sort of seeing peak crypto now as well, because this is another thing. Well, we're over the hump. I mean, again, the price may be different depending on when you're watching this, but in late 2025, crypto crashed a fair amount. Yeah. And this is after, by the way, all this regulation came off. You've got a very pro-crypto administration in the US. And to me, like I've been a big crypto skeptic for a long time. And it's funny because like some of the smartest people and some of the dumbest people I know are all in on crypto, which is like fascinating. Like there's no middle ground. After the show, I'm going to find out which bucket I'm in. Yeah, exactly. So it's like, you know, geniuses and people who have never known anything about finance or technology, like putting their life savings in this. There's a meme with a bell curve with people. I'm sure there are. And it's the same concern we talked about with AI, which is it's really great as a speculative investment. And like the future is crypto, but who uses crypto that isn't a drug dealer? You know, like it's just a technology that the average person, I find, has no use for. And they can't even describe in layman's terms why they should care about it. Like if you want to con someone out of their money, crypto is incredible for it. I get those calls. Yeah.
If you want, like, yeah, if you're in the illicit goods or services business, you know, you want to order a hit, like, Bitcoin, have at her. But, you know, is that what the valuation is based on? Like, come on. And I'm sure there's like people who are going to be like, furiously, like, this Geoff guy's an idiot. He just doesn't understand its potential. And there's a lot of money riding on that narrative. But I don't know. Like, that's my crypto take. So crypto, I'm completely with you. On the enterprise side, though, in terms of like actual blockchain, the purpose of which is to solve some sort of a business problem, I think I disagree with you, because I actually think that if you peel back the layers on any enterprise blockchain use case, it becomes apparent that there is a better way to solve that problem. I'm not saying that a decentralized ledger can't be used to do stuff. I'm just saying that it is very rarely the highest and best use of your time to actually build one. So I'll give some examples just because, like, I came prepared, Geoff. Everybody watching this, Google "Australian Securities Exchange blockchain." This was a slow motion train wreck for me, because I heard that they were moving their trading platform. I might get some of the details wrong, but I heard they were moving their trading platform from a traditional system to a blockchain. And I watched this. I was like, there's no way they're actually going to do that. But they kept releasing announcements. You know, it was a big deal for their executives. I was like, this just isn't going to happen, because this isn't a real solution to a real problem. They kept announcing it, kept announcing it. Boom. Actually, no, we're not doing that. Whoops. And it's like, yeah, obviously that wasn't going to happen. Or a few years back, IBM and Maersk, right? They partnered to use some IBM technology to build a platform called TradeLens. Right. And it sounded really good.
It was like, shipping containers sometimes get lost. You know, it's great to have an immutable ledger and, you know, people can scan stuff in or whatever and track it around the whole world. You know, biggest shipping company in the world. A questionably relevant tech company, but still quite large. IBM is not anywhere near the size of like an Apple or a Facebook, or I guess Meta now, or Microsoft, but they're still a big player. They shut that down. It wasn't a real thing. No. Right. And every time this comes up, it always comes back to: that would be neat if it worked, but it's just never the best way to solve a problem. So I agree with you, crypto, not great; enterprise blockchain, I wish we could just close the door on this. Like if I never have to hear about it again, I know talking about it on a podcast is not the solution for never hearing about it again. But if I never have to hear about it again, it'll be too soon. But there's an implication here that really concerns me. Okay. Which is that in all these use cases where you're watching it from a safe distance and being like, why would you do that? It's obviously not the right call. The answer to that question is that somebody somewhere with a lot of authority said, how can we get blockchain into our business? What's the case for blockchain? Like this is a company that should be using blockchain, damn it. We're backing into it. Yeah. Well, exactly. And it's like, well, you know, sir, it's always a sir. You know, we've got our top minds on it. And you know, this is the example we came up with. And like, if you squint really hard, it's not that stupid. "If you squint really hard, it's not that stupid." That's like, stamp that on the bitcoins. Yeah. Yeah. There's a lot of enterprise projects that could be the slogan for. Fair enough. That, like, is starting to sound familiar to me, right? Because that's some of the concerns we're having right now with AI.
And I think if the tide is going to go out at all, we're really going to see how many of these projects have merit versus how many are, like, the AI-ification of, you know, what's effectively blockchain. So I will say, and I'm a little biased here, I do personally use AI for things. A fair amount. Sure. One of my favorite things to do is ask it if certain subway stations have bathrooms, for example. It actually does know. It's very smart. It's faster than Googling it. Are there bathrooms at subway stations around here? There are in some of them. Really? I'd be, like, very concerned at the cleanliness level based on some of the things I've seen. And that's definitely the direction that I wanted to take this conversation. I have yet to find a valid use case personally for blockchain. And then I'm also going to give AI the "it's early" excuse, because it sort of is. I mean, AI has been around in various forms since the 1950s, Marvin Minsky's lab or whatever. But like the modern iteration of it, we're in an AI summer, you know, and we haven't been there for a long time. And so the past three years has been sort of AI 6.0 or whatever we're on right now. And that's been going quite well. But blockchain, I mean, 16 years. Oh, yeah. And we have yet to actually come up with a viable use case for it. And so I was hoping in this podcast, we could just collectively shut it down. Oh, yeah, like, there is nothing between me and shutting the book on blockchain. Like, I don't think, I mean, I'm curious. I've read a handful of like 2026 tech trends. I don't think any of them are like, blockchain. Like, this year, we're going all in on blockchain. Like, NFTs are back, baby. Yeah, speaking of scams. So what did you do with your massive portfolio of NFTs? Conned them off on another sucker back in the day. The greater fool theory. It's sort of how that works. So yeah, shut the book. Yeah. All right. Book is closed.
Our editors will put a book on the screen and we'll close it. I love that. I wanted to talk about a couple of other things that are sort of like blockchain in the sense that they've been much hyped and are maybe related to or adjacent in some way to artificial intelligence. So one is self-driving cars, which is an agentic AI use case. Right. You know, the AI is an agent that is acting as a driver on behalf of you, the passenger; you're the principal. They've been around for a long time, you know, self-driving in various forms, depending on what level you would describe it at. It's been around maybe in its modern iteration for, let's say, a decade. Uber was involved and they got sued. Google's got Waymo. Teslas have this thing called Full Self-Driving, which comes with a warning that says, this is not actually a fully autonomous driving system. You must be present, and it'll shake the wheel so that, you know, it knows you're still there. What are your takes on that for 2026? Are we going to just ditch the car and have robotaxis? That's a really interesting one. I was a self-driving car skeptic for a very long time. Like to me, I put it in the cold fusion bucket, where it's just two years away, or whatever, five years away, no matter when you asked people. And like, Elon was like the... I made a slide where I just clicked for every year he said it was one year away. It was a running joke for a really long time. I've come around on that in 2025. OK, you're more bullish. I'm more bullish, and I'll go as far as to say, like, self-driving cars are an inevitability at this point. OK. Like the proliferation of them is inevitable. And I think Waymo is probably the first company that's really proven that out at scale. And you're starting to just hear more stories of people, you know, on the West Coast or visiting the West Coast, or now, you know, now they're in a few more cities. Austin.
Yeah, Austin, I think they're in, or coming to Phoenix. Arizona's got good weather for it, though, right? Well, it does. And that's, to me, a surmountable problem, though. It's like, you know, road conditions, weather, topography, like all of that will come with time. I don't think there's anything inherently insurmountable about it. But what's interesting, and what changed my mind, is the virality of the experience. And what I mean by that is any time I know someone who's in a self-driving car, I know they're in a self-driving car because they sent me a picture of it. Yeah, right. And they're like, no driver. There's no driver. Look at this. Like, people love to talk about it, and they're doing it. And if you ask them, would you do it again? The answer is yes. Right. Like those are signals that this is not going away. And so we've already started to hear about organizations like Waymo talking about bringing this to more and more cities. Yeah, there's concerns like weather, like topography. There's like no shortage of concerns. But to me, those are like hurdles that we will get over. I don't know that we're going to get over all of them in 2026, but we're going to continue to see a steady march forward here. And as you can imagine, I wasn't around when that first Model T rolled off the line. But to get from that to the modern age of highways and cars being a part of everyday life in America and around the world, that didn't happen in 18 months. It took decades. And this, I think, will be faster than that, but it's going to be years and years. And it's going to be transformative for our society and for, I think, in a lot of ways, our physical infrastructure, in ways that are really tough to predict right now.
One of the pieces, though, that I've been thinking about lately, and this is especially true in the Western world, is that for a lot of new immigrants to the U.S., Canada, Western Europe, one of the lowest barrier to entry ways you can start earning an income is as an Uber driver, or as some sort of, you know, basically food delivery, taxiing people or stuff around. And parcels is the same, right? Amazon has an awful lot of humans that are serving as the last mile. If you don't need those people anymore, what are we doing with these people who are functionally self-employed? And this is their skill set. Oh, and by the way, they're saddled with this large asset that they had to purchase to be able to do that. A vehicle of some sort. Yeah. So that's a big societal question that I think we're going to have to grapple with. It's going to be asked more in 2026. I don't know that there's going to be, you know, a volcanic moment, but we're marching in an inevitable direction. So the reason I brought this up was because of exactly that point. I'm glad you sort of came to it, because when I was recently in one of the cities, Austin, that has a large fleet of Waymos, it's integrated with Uber. And you order an Uber and a Waymo might show up or a person might show up, right? Depending on where you're going, availability; I'm assuming there's an algorithmic aspect to it. In fact, there obviously is. And I didn't actually end up riding in one. But I did ride in a regular Uber for an hour for like $38 or something. And I thought, oh, I think the price of Ubers in Austin might have taken a hit because, you know, the drivers want more money, and Uber's like, no problem, I've got a fleet of autonomous vehicles that don't get tired. And that was why I brought that up. I think it's, in microcosm, the scenario that we're talking about with AI sort of writ large. And this just happens to be a particularly useful case that solves a problem.
It will displace workers if it's done correctly. Never listen to an AI CEO who says, oh, the goal isn't displacement. No, the problem that they're solving is salaries, it's wages, it's commissions, whatever. And I think that self-driving is one such case. I'm less bullish than you, though. I do think it's asymptotic. I think the reason that we've seen it in Austin and in Phoenix and in California is because they can mostly solve for those conditions. I say mostly because there are still relatively high-profile problems. Although, as far as I know, nobody's been killed recently. There was an Uber-related death in Arizona about eight years ago. But as soon as you complicate things: like, I think that driving in Ohio in January is exponentially more difficult than driving in Chandler, Arizona at any time of year. I know this because we live in a northern climate and I had to drive to get to a train to get here. And people don't know how to drive in the snow. I don't know that a robot will be able to do it when it can't read the lines on the road and it's got to make a judgment. When do you just pull over, throw the hazards on and wait for this all to blow over? So that's a really interesting nuance to it, because I'm framing it as snow is incrementally more difficult. You're saying, no, it's more than incremental. Not an engineer, but... Well, yeah, somebody probably smarter than us or more knowledgeable, and not a CEO, because they'll probably just tell us, of course, it's just around the corner. Like this cold weather is going to stop me? Get out of my way, snow. Put a cow catcher on the front and get me to the mall. Let me frame my position a different way. It's very hard for me to believe that the Waymos and Ubers of the world in 2026 or 2030 or whatever year are going to be like, you know, I guess we're just throwing in the towel. We can't beat snow, snow wins. So that market is closed.
Like, I think we're going to just bang the hammer at this until at some point we reach a breakthrough. And I don't know what year that will be in. My guess is sooner rather than later. But, you know, if your guess is later, then here we are. Yeah, it is later. And I want to make one more point on this before we transition to our most exciting, you know, predictions for 2026. So the things that we're most excited about going forward. So think on that while I make this point. Apple looked into building a car, right? The Apple car. They never released anything publicly about it. Apple is notoriously tight-lipped. Like, whenever they acquire a company, they release the same statement, and they have for decades. It's like two or three sentences that's basically like, leave us alone, journalists. And so they didn't talk about it. But there's been a bunch of reporting on this. If you Google the Apple car, you'll know exactly what we're talking about here. And they just dropped it. One of the most well-resourced companies in the world, entering a market that exists, where they did not have to literally reinvent the wheel. Right? Oh, maybe that's part of their problem. Maybe they didn't know it was going to be a triangle. It's going to be revolutionary and creative. They just dropped it because they said, you know what? We actually can't do this. Billions in investment, thousands of employees were involved in it. And they just dropped it. So, like, Google saying actually this Waymo thing isn't scalable and we're not making any money doing it, we're pulling the plug? Like, I don't think that's likely. But, you know, too big to fail isn't a thing in tech. Everyone has their moment. Well, I'm skeptical of that for a couple of different reasons. I mean, we're talking about in a lot of cases hardware versus software. Sure. And the hardware investment for Apple was very, very high there versus. But they're a hardware company. I mean, they are a hardware company.
But I think, you know, and it's funny that you talk about Apple being notoriously tight-lipped, because the corollary there is like, well, why did they shut that down? And because they're tight-lipped, they haven't been like, well, because of X, Y and Z. But if I had to guess, at some point, somebody did a sober analysis and said, like, what's an optimistic guess for how many of these cars we could sell in the next five years? It's an ROI calculation. It's an ROI calculation. But Google is going to make that with Waymo at some point, or Alphabet's going to. They're going to look at it and they're going to say, we've been throwing a bunch of money into this. They're not profitable right now. I would be shocked if they were profitable. So at some point, some executive is going to say, like, should we keep throwing money into this, you know, 55 gallon drum that's already on fire? Right? Maybe. But I think there's a more subtle pivot they can do, which is asking: which is the most profitable piece of this? Yeah. And at some point, while it's not buying or designing cars, it's a software component of other cars, and they're able to pivot. And we could call it CarPlay. Yeah. Yeah. Well, exactly. Right. And so that's where it can potentially go. But the other piece I don't want to lose sight of is, you know, if all this comes to pass and you end up with all these kind of unemployed, low-skill workers, this to me is broadly the risk being created by technology today. Right. Because that's one example. That's always been the risk. Like, so when somebody invented the combine, I needed fewer people to harvest my field. Right. Yeah. Yeah. And there are implications for that. And when somebody invented, you know, the keyboard, the QWERTY keyboard attached to a PC, I didn't need a person to take a letter anymore. Right. Like this is just the history of technology, inevitably.
Well, but it's also the history of, and I don't want to get too much into political or social theory, but it's just like, it's always the bottom of the socioeconomic ladder that gets chopped first. Right. And I don't know, this was like one of the many lessons for me coming through COVID: it's just like, anytime anything happens, the rich get richer. Yes. Right. Like it's become very difficult for me to imagine a world where something happens and that's not the case. Economy shuts down, rich get richer. Economy booms, rich get richer. Well, exactly. They're just so well insulated right now in the, you know, political economic system that's been created. And so your point is well taken. That does not make this moment in history unique, but it makes it flammable, I guess I could say. So, so my argument on this, and I have a chart I like to show, and it's the adoption of mechanized agriculture tools over time compared to the animal power they replaced, essentially, right? You know, I used to have an ox to till my field. Now I got a mechanical plow, and that took decades in the United States, right? And there's a point at which the line crosses over. It's all very exciting, but people coexisted. You got an ox or a horse, I've got a tractor. Now everybody has tractors, but it took decades. With AI, it's like they invented that thing like a year ago, you know, three years ago in the case of ChatGPT, and it's like, well, now we can do the job of a translator. And now we've got a bunch of unemployed translators, right? And the technology can propagate so quickly in the scheme of things that I think you are left in that transition period with a lot of people who are impacted all at once. And you know, again, not to get too far into social and political theory here. But when you have a lot of unemployed, relatively young people, like, it's not good for your society.
They tend to, you know, move fast and break things, as far as I understand. And so, I mean, the tech leaders have been talking about UBI, but that's very politically unpopular, universal basic income. And so I can't see that happening for political reasons. And so, yeah, you are left with a disenfranchised class whose work has been essentially subsumed by artificial intelligence. And so, yeah, I think that that is a super compelling point and not the note we want to end the show on. Well, but just on the UBI point, and I'm a bit of a UBI skeptic, not because I think it's necessarily a bad idea, although, you know, we'll leave that to the economists. But because I think it's extremely difficult to implement, you know, pragmatically, and every time we've done some sort of experiment with it, it's been shut down and, oh, we'll have to figure it out next time. Well, that in and of itself is extremely telling. Yeah, it's political reasons. But there's something else, which is, I think, sort of a soft UBI, which is the rise of the public sector, like literally as a device to employ people to prevent unrest. And by the way, we used to have this really big mechanism in a lot of countries to do this, which was called the military. Yes. Right. And like, oh, we've got all these troubled young men. What can we possibly do with them to keep them away from crime? OK, well, we can give them some discipline and we can point them at the enemy. Right. And I don't know. Maybe that's a controversial thing to say. But it was an extremely effective mechanism for doing that. Many countries have national service, so you might not be in the military, but you might be called upon by the government to do something like, you know, labor. Sure.
Well, and we've seen, you know, we're at an interesting point in modern history where we've seen just kind of the erosion of military budgets and, you know, military staff size for so long. And I think that curve is starting to bend up again. Civil service is also shrinking in a lot of places as well, because it's not politically popular to have a large civil service. Right. Well, and I agree with you. And so these are going to be two forces that are probably colliding, because sure, it's not popular. But if you have, you know, mass unemployment, what does that look like? And it's, you know, it's somewhere between funny and depressing, because you can talk to the leaders of the tech companies making these products and they're like, yeah, it'll probably lead to a lot of unemployment. Don't know what we're going to do about that. And it's not my problem. Well, but it's a, you know, it's a tragedy of the commons type problem, because if there's less people with money for consumption, then what happens? How do the markets stay buoyant? So that's a very real concern. Well, one point that I'll make on this is I think for one of the first times in history, the technology being proposed has the potential to impact actually relatively high wage, high salary, high status occupations, right? Like if I make an AI that can read an X-ray with, you know, 99.99999% effectiveness or whatever, like that radiologist who might have been pulling in half a million or three quarters of a million dollars a year, all of a sudden their livelihood is impacted, and those people actually tend to have a lot more political power. So we could see a situation where they're able to leverage that. I agree with you about how technology has historically worked. Also, the radiology one is interesting.
And I, you know, I was talking to someone last week who was saying that if you look at the studies in the last handful of years, radiologists are using AI more than ever to help, you know, with imaging, which is a really, really good thing for, you know, everybody's health, but it hasn't led to fewer radiologists. It's actually led to more radiologists. Or maybe "led to" is doing a lot of work there; maybe it just has not stemmed the flood of radiologists. I don't want to imply causality there, but it has not decreased the number. That explains why when I look left, I look right: radiologist, radiologist. All radiologists. They're everywhere. Yeah, I moonlight as a radiologist. No way. Me too. That's amazing. Yeah. So I think that's such a compelling example. So it was the same with ATMs, right, in the 1970s and 1980s. There are more bank tellers now than there were back then. The nature of their role changed. Again, it's that speed piece, but I do think that the political power that the money classes have, and this podcast got really interesting, the political power that these money classes have does make it likely that there might be a society-wide solution, or at least a serious discussion of one. Whereas historically, maybe there wouldn't have been. Well, you know, one of the other interesting kind of corollaries of that is people have started to talk either seriously or facetiously about, like, well, are we ready for AI CEOs then? Yes. Sundar Pichai came out and said AI could do my job. And he was probably just trying to sell his AI. But, well, it's really interesting, because I think to answer that question, you have to really soberly ask yourself, what is a CEO good for? And if you look at a lot of the day-to-day tasks of a CEO, you're like, yes, AI can, if you think their job is to come up with a strategy and communicate it or whatever. Yeah, there's a lot there that an LLM can do. It can do a good impression of the CEO, effectively.
But, you know, to me, what is the core of a CEO? It's inspiring confidence and raising capital. Getting fired when things aren't going well. And getting fired when things... There's a throat to choke there, right? It's accountability and projecting confidence so that you can get more investment and raise the value of the organization. It's not just strategic leadership. And strategic leadership, by the way, is not just a good idea, right? There's just a lot more to it than that in terms of getting it sold and implemented. And so I think while people do talk seriously about it, I think it's an injustice to the role to suggest that an AI can do it. And as we've talked about before, you know, the higher up you are, the more insulated you are, I think, in a lot of ways. Yeah. I see you smiling nervously. It's not so much nervous smiling. And so you think, oh, those poor CEOs. Yeah, yeah, yeah. Pour one out for those poor CEOs. It's an injustice to the role. Well, CEOs who are watching at home know that Geoff Nielson, host of the Digital Disruption podcast, has your back. All right, you will be OK. Is anyone defending those underappreciated and underpaid CEOs? Yeah, it's us here at the... We know how he voted on the Tesla pay package. We got that all figured out. So let's talk about going forward, a couple of things that have us excited, because we've talked about like a lot of, frankly, pretty depressing things, a lot of very exciting things, but depressing things. I'll go first. The thing that I am most excited about for 2026 is not AI related. It's not self-driving car related. I tried Tesla Full Self-Driving one time and it was cool, but it was kind of scary. So I think I'm probably not going to buy that. It's a foldable Apple phone. They're tight-lipped, notoriously. They haven't come out and said we're going to make a folding phone, but more companies are making these. Samsung just released a really thin foldable phone, very cool. They're expensive.
But like, when I can fold my iPhone open and it's going to be a tablet, that's the thing that I'm most excited about. And sometimes I have dreams about it. I come into the office and I look at my tiny little iPhone mini. And I think, if only this were bigger. You know, you can get these right now from companies that aren't Apple. We'll put a pin in that. We'll come back to that discussion. I'm very excited about the ecosystem play that is iMessage, and I've got all these books in my Apple Books library. Can't get that on Android. The services life is incredible. And usually they don't bring something to market until they've worked out a lot of the kinks, not exclusively, but usually they're pretty good. Actually, I'm not scared of a first gen Apple product in the way that I am for other manufacturers, although we're on like gen nine. So that's the thing that I'm most excited about. I'm anticipating, this is sort of prediction slash excitement, I'm optimistic that in September of 2026, Apple will say, you know, we think you're going to love it. It's our foldiest iPhone yet. No headphone jack. I can't wait for that. What are you excited about? I'm skeptical that you're going to get your foldable. Your foldable phone. All right, I'm really skeptical that you're going to get it. But I'll pray for you. Thoughts and prayers, like, I just hope you get your foldable iPhone. I'm, like, just Android to the bone. So I don't have a horse in your foldable Apple race. You thrive on the instability. I thrive on the flexibility. Well, the folding phone is going to be very flexible. Well, there you go. And you can get it now through a series of Android phone manufacturers. How many easy payments of? Yeah, no, it's all easy payments. I don't know that I have a technology right off the bat that I'm, maybe not necessarily a technology, but is there something that you're excited about? Are you excited about AI 2.0, for example?
Yeah, yeah, I'm a guy, as you may know, who just gets really eye-rolly at, like, bullshit. And there's just been so much the past two years around AI, and the tide is coming out. Yeah, when we can start to have real conversations and people don't talk about AI like it's the metaverse or like it's blockchain; when it's, what can this actually do, what can it not do, what are we going to invest in? Like, when it's a real practical conversation tied to the capabilities, I'm really excited for that. Like the hype machine, I think, is running out of gas. Yeah. And yeah, when companies are like, oh, it turns out consumers don't just want to pay for stuff with AI tacked onto it. You don't want an agentic version of Windows? Yeah, exactly. So you were one of the hate replies on Twitter when they announced that. I was all of the hate replies on Twitter. No, it's like, again, it comes back to: why? Like, what are you doing for people that they should be excited about? Versus what are you doing for investors that they should be excited about? And the investors are going to run out of steam at some point if the money from consumers doesn't follow. And I have money in stocks; I don't want the bottom of the market to fall out this year. That's not good for me and my family. And it's a discounted opportunity. Like, we could unpack a lot of that, but that's probably not the road we want to go down right now. But being able to start having real conversations and talking about what this technology can do, I think that's going to be a really good thing. And regardless of exactly how the market performs next year and the year after, I think that's going to be a very good thing. Fantastic. Well, I'm glad you forced us to end on that note, which is positive in a sort of skeptical way, versus just our kind of doom outlook that we've been talking about for a lot of this. Thanks for moderating that.
I really appreciate this. And for everybody else, stick with Digital Disruption for all things future of work, future of tech, what's coming down the pipeline next. And what's that other thing that I was just about to say? Oh, yeah, don't forget to like and subscribe. Thanks for watching. I'm going to be back with a new video next week.