Oxide and Friends

Predictions 2026!!

99 min
Jan 8, 2026
Summary

The Oxide and Friends team makes predictions for 2026, 2028, and 2031, focusing heavily on AI's evolution from hype to practical tool, the emergence of coding agents as standard practice, and potential corrections in AI company valuations. Key themes include the normalization of LLM-assisted development, custom software replacing SaaS, and growing concerns about AI-induced psychological effects and economic disruption.

Insights
  • LLM-assisted code writing will transition from controversial to standard practice within 3 years, similar to how syntax highlighting became invisible infrastructure
  • The 'vibe coding' term will become pejorative as rigor and testing become mandatory with AI-assisted development, requiring new terminology
  • AI companies face a 'Challenger disaster' moment driven by normalization of deviance: years of getting away with unsafe prompt-injection practices will eventually end in a high-profile security incident
  • Custom-built software will increasingly replace SaaS as LLMs make bespoke solutions cheaper than generic platforms, particularly in vertical industries
  • The AI industry is entering a dot-com correction phase where companies like Harvey (legal AI) will fail despite massive valuations, but foundational AI models remain valuable
Trends
  • Shift from AI hype to practical tool adoption in enterprise and developer workflows
  • Consolidation of AI safety concerns into regulatory and diagnostic frameworks (DSM-5 updates for LLM-induced psychosis)
  • Rise of conformance-suite-driven development enabling rapid creation of new protocols and languages
  • Decline of consumer smartphone innovation driving Apple and others to explore new form factors and business models
  • Vertical SaaS disruption through custom AI-generated software tailored to specific industries
  • Emergence of AI-generated open source slop creating demand for proprietary/branded software with provenance
  • Waymo and autonomous vehicle adoption reaching critical mass in specific geographies (SFO airport demand)
  • Economic anxiety shifting from AI doomerism to livelihood concerns and job displacement fears
  • Regulatory and political backlash against AI companies requiring third-party credibility (Pope, Obama) for economic messaging
  • Data acquisition frenzy by frontier AI labs buying infrastructure, archives, and data sources indiscriminately
Topics
  • LLM-Assisted Code Generation and Coding Agents
  • AI Safety and Prompt Injection Security
  • Normalization of Deviance in AI Development
  • Custom Software vs. SaaS Business Models
  • AI Company Valuations and Dot-Com Correction Parallels
  • Sandboxing and Code Execution Security
  • Vibe Coding and Software Development Terminology
  • AI-Induced Psychological Effects and Deep Blue
  • Autonomous Vehicles and Waymo Adoption
  • Open Source Software Quality and AI Slop
  • Frontier Model Company IPOs and S-1 Disclosures
  • AI Regulation and Political Backlash
  • Jevons Paradox and Software Engineer Employment
  • Browser Development Using Conformance Suites
  • Agent Orchestration and Multi-Agent Systems
Companies
OpenAI
Discussed as potential IPO candidate; subject of predictions about S-1 disclosures and economic model viability
Anthropic
Mentioned as frontier model company with potential IPO; known for Claude and vending machine agent experiments
Harvey
Legal AI startup with $8B valuation predicted to fail in AI correction; emblematic of unsustainable AI company valuations
SpaceX
Potential IPO candidate; S-1 filing expected to reveal Cybertruck purchases and Elon enterprise cross-subsidies
Tesla
Predicted to exit consumer car business within 6 years, pivoting to batteries and fleet sales instead
Apple
Facing smartphone sales decline; predicted to pursue new flagship form factors and business models post-scandal
Waymo
Autonomous vehicle service predicted to reach demand saturation at SFO airport with 10+ minute wait times in 2026
Salesforce
Cited as successful example of customizable platform with 50,000 professional customizers at Dreamforce
NVIDIA
Stock valuation predicted to peak in 2025; facing competition and headwinds that will limit future growth
Friend.com
$129 AI companion pendant predicted to have under 10,000 activated devices by end of 2026; subway ads defaced in NYC
Shopify
Cited as successful example of sandbox-based plugin architecture enabling customer customization
DeepSeek
Mentioned in context of LLM reasoning in non-English languages and model behavior analysis
Oxide
Host company; discussed custom software replacement of SaaS tools and LLM-assisted development practices
Iron Mountain
Speculated as potential acquisition target by AI companies seeking historical data and archives
Clubhouse
Historical example of well-funded startup that quietly declined; parallel to predicted AI company failures
Pets.com
Dot-com era example of unsustainable business model; Harvey compared as modern equivalent in AI space
Wall Street Journal
Reporters used Anthropic's vending machine agent to test its security through social engineering
Humane
AI wearable company mentioned alongside Friend.com and Rabbit R2 as failed form factor experiments
Rabbit R2
AI wearable device mentioned as example of unsuccessful AI companion hardware trend
Zoox
Autonomous vehicle service mentioned as alternative to Waymo in San Francisco Bay Area
People
Adam Leventhal
Co-host making predictions on AI acquisitions, LLM programming languages, and Tesla's consumer exit
Bryan Cantrill
Co-host predicting AGI/ASI terminology shift, AI company acquisition binge, and a Challenger-style security incident
Simon Willison
Guest making predictions on coding agents, agent orchestration, and software engineering as typing becoming obsolete
Steve Klabnik
Guest with optimistic 6-year prediction that AI won't collapse economic/governmental systems
Ian Cutress
Guest predicting Waymo demand saturation, Friend.com device failure, and Windows dominance on Steam
Andrej Karpathy
Original author of 'vibe coding' concept; tweet misinterpreted by many regarding LLM code generation
Chris Dixon
Author of 'Read Write Own' book repeatedly criticized by Adam as poorly argued Web3 advocacy
Molly White
Blogger cited for critical analysis of Web3; referenced in context of Adam's book criticism
Evan Ratliff
Shell Game podcast host; created AI voice agent and AI-only company; invited as future guest
Eliezer Yudkowsky
Author of AI doomerism book 'If Anyone Builds It Everyone Dies' that Adam hate-read
Sidney Dekker
Author of 'Drift Into Failure' about normalization of deviance; recommended for AI safety context
Gene Kim
Co-author of 'Vibe Coding' book; one of three authors/two publishers misunderstanding the term
Sam Altman
OpenAI CEO; subject of Adam's daughter's prediction that he will go to jail
Dario Amodei
Anthropic CEO; mentioned as publishing essays attempting to justify AI economic models
Jensen Huang
NVIDIA CEO predicted to hand over reins to successor within 6 years as stock peaks
Pat Gelsinger
Former Intel CEO; joked as potential NVIDIA successor but focused on faith-based LLM startup
Elon Musk
Tesla/SpaceX CEO; subject of predictions about Cybertruck purchases and enterprise cross-subsidies
Barack Obama
Suggested as credible third-party voice to legitimize AI company economic messaging
Pope Francis
Predicted to weigh in publicly on LLM economic impact; already discussed AI ethics previously
Mike Cafarella
Guest from previous year who submitted predictions for 2026 episode via email
Quotes
"It not only will be mainstream, the idea that LLMs can write effective code; it will effectively become a fringe belief that this can't happen."
Simon Willison (early discussion on coding agents)
"The reason coding agents work so well is that code is reversible. Like we have Git. We can undo our mistakes. The moment you use these things for something where you can't undo a mistake, everything goes to pieces."
Simon Willison (discussion of agent limitations)
"I think we are due a Challenger disaster with respect to coding agent security."
Simon Willison (security predictions)
"Work is very important to people's sense of meaning. Any kind of claim that like we've built this kind of super intelligence and nobody needs to work again, I think is going to be really resisted."
Bryan Cantrill (three-year AGI/ASI prediction)
"The Jevons Paradox for software engineering would be, as this becomes much cheaper, do we do much more of it? So we're not putting people out of work because there's actually much more of it to do."
Simon Willison (discussion of software engineer employment)
Full Transcript
Hello, Adam. Hello, Brian. How are you? I am doing well. How are you? I'm good. And the hype has been building here. Everyone has been dropping in. So showing up four minutes late is like a totally pro move. I love it for the new year. Yeah, yeah. Listen, I was going to go full like Lauryn Hill and not like take the stage until 10 p.m. You know, really just like really, uh, just really get the crowd amped up. Actually, to the point of like anger. Like, what am I even here for? One year prediction: Brian finally joins the podcast. That's right. And I am joined by Simon Willison here with me in the litter box. Simon, it's so great to have you here. Hey, it's really exciting to be here. We've just been nerding out about servers outside on the shop floor. It's been great. Yeah, so Simon was just like, hey, before we get started, I'd love to look at the machines. I'm like, okay, I got to do the world's fastest tour of the hardware. And Simon, I promise I'm going to make it up to you with a much more in-depth tour. But it is really great to have you here. Okay, Adam, I just like, I just want a little reality check with you. It feels like this year is more unpredictable. It's like there's more of a realm of possibility for this year than any year I can really remember. It feels like, if you come back from even a year in the future, in fact, I actually struggled, Adam, in coming up with like three and six year predictions this year. Yeah. Because I'm like, well, this kind of three-year picture, that's going to be done in a year, like this thing I'm thinking of. I know, I know. It's like, are you having that same, like, do you feel that same way? Totally. Just like everything is possible. And, uh, you know, in past years we've had like a bag limit that's like, oh, you can only have one crypto, yeah, one crypto prediction or one AI prediction. And I'm like, I struggled to come up with anything that isn't AI or AI adjacent. And you're right.
So let's reflect that we only made the bag limit mistake once. We did that with Web3 in 2022. We did a bag limit of you can only have one prediction. It was a huge mistake because everyone wanted to make three predictions around Web3. And instead, everyone made one good Web3 prediction. Namely, this whole thing is going to disintegrate. And this is, Simon, Adam in particular made the prediction that is famous to us anyway, that Web3 would drop out of the lexicon in 2022, which ended up being dead to rights. I thought that was a bullseye. Let us not speak of your prediction last year, Adam, that Web3 would re-enter the lexicon. Yeah, no, that was definitely a dark, I mean, last year was a dark moment, but much like this year. But yeah, I thought Web3 was going to be back. I also thought a certain book was going to be on the bestseller list. And I did spend a decent amount of time validating that not only was this book not on the bestseller list, but when it was on the bestseller list in 2024, ChatGPT hastened to point out that it was annotated with the dagger, the dagger which indicates like mass, you know, corporate purchases, gaming the system. Now, Adam, I know you're hesitating to name the book because you don't want to do it any favors, but you're really going to leave people confused. You're going to need to name the book. I assure you this will lead to, I promise you, it will lead to no additional sales. Can you name the book that you're referring to? I feel bad that I've been hating on this book literally for three years consecutive on this thing. I hated on it before it came out. I hated on it when it came out and I made the mistake of reading it. I've hated on it talking about Molly White's hateful blog on the topic and then on last year's prediction episode. But I will do it again. And I swear it'll be the last time. It was Read Write Own, the illustrious Chris Dixon's garbage book.
And, uh, I would like to say that you actually don't feel bad, but you do feel bad that you don't feel bad. Like, your remorselessness leaves you with some residual sense of shame. I think it's bad that I'm bringing it up again, that, like, obviously I haven't moved on. There you go. You know what was great is I was listening to that and I'm thinking, like, oh, I should go check. You know what, I don't have to check. Adam's gonna check. Yeah, we don't need to double-team this one. Exactly. Okay. So, and then Simon, you were with us last year and you had, I thought you were kind of hard on yourself on your predictions, but I thought your predictions were really quite good. You had a prediction. Well, in particular, you had a prediction around what agents were and were not going to be. Right. Yeah. How do you feel about that one? I feel like that one was right on the money. I feel pretty good about that one. I said that 2026 or 25 would not be the year of agents. That one, I think I got wrong because it kind of was the year of agents, but I did specifically call out that human replacement agents weren't going to happen. Coding agents and research agents were. And that I nailed, right? Research agents, the first six months of this year was all about deep research. And then coding agents, oh my goodness. Oh my goodness. And I think you absolutely nailed it. I mean, this is why, Adam, we've said this before, but we're glad that we record these sessions. So you're getting more than the prediction. You're getting the context around it. And if you listen to your context around it, you were very clearly calling out, separating out coding and research agents, which you felt had, it was funny because like, you were like, these are kind of already here already. And you realize like, oh my God, they weren't completely already there even only a year ago. They had exploded in the last year. But there is one thing I'll say, which is that coding agents are actually general purpose agents.
Like, Claude Code is not about code. Claude Code is about anything you can automate by running bash commands, which is everything. So actually, if you know what you're doing, Claude Code is a general purpose agent that can solve any problem that you can attach to a bash script. But I think you were the, the, the delineation that you, you had last year, which I thought was very good was these things, anything to do with money, you are not going to let these things loose on anything to do with money. And I think we saw that with a, what's a proxy for money? Databases. And we saw these things deleting production databases. Right. And it's like, I know you said in the, you know, that in the, you know, in the readme, you said in all caps, do not touch the production database. And I did it anyway. And you're right. This is a very serious issue. And this is a 95 out of 100 in terms of its severity. I mean, it's just like it's comical what some of these things would do. Well, this is the thing I realized is that the reason coding agents work so well is that code is reversible. Like we have Git. We can undo our mistakes. The moment you use these things for something where you can't undo a mistake, everything goes to pieces. I think you're right. Yeah. And I think when you, you said it earlier too, that the gullibility problem was a, was a real problem. And I, the, I, um, I don't know if you have listened to the, the Shell Game podcast with, uh, Evan Ratliff. Oh my God. And Adam, you've, you've listened to that. Yes. You listened to that. Oh my. And I mean, I, it delivered. I, I trust. Yeah, it's excellent. I would also say as a teaser to listeners, we invited Evan on the show. He got back to us and he says he has like some bah humbuggery around predictions. Like he doesn't make predictions. He's a reporter. He reports on facts. He doesn't try to anticipate them. But we have penciled him in for the future. So not a predictor, but we'll get him on somehow.
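Simon's point above that coding agents work because "code is reversible. Like we have Git" can be made concrete. The snippet below is an editorial illustration, not from the episode; the repo, file name, and commit messages are all made up. It builds a throwaway repository, commits a bad "agent" edit on top of a good human one, and undoes it with a single revert while keeping history intact (it assumes a `git` binary is on the PATH):

```python
import subprocess, tempfile
from pathlib import Path

def git(*args, cwd):
    """Run git with a throwaway identity so the demo repo can commit."""
    subprocess.run(
        ["git", "-c", "user.email=demo@example.com", "-c", "user.name=demo", *args],
        cwd=cwd, check=True, capture_output=True,
    )

repo = tempfile.mkdtemp()
git("init", cwd=repo)
app = Path(repo, "app.txt")

app.write_text("good\n")                      # human-authored state
git("add", "app.txt", cwd=repo)
git("commit", "-m", "human change", cwd=repo)

app.write_text("agent rewrote this\n")        # an agent's bad edit, committed
git("add", "app.txt", cwd=repo)
git("commit", "-m", "agent change", cwd=repo)

# Reversibility: one revert undoes the agent's commit without rewriting history.
git("revert", "--no-edit", "HEAD", cwd=repo)
print(app.read_text().strip())  # -> good
```

This is exactly the asymmetry Simon describes: the same agent pointed at a production database has no equivalent of `git revert`.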
And so in particular, what Evan did is, is, uh, he, their Shell Game has got two seasons, and in the first season he created a voice agent of himself and set it loose into the, into the universe, with wild results. And then the second season, he's even crazier, because he started a company with only AI agents, and with, um, predictably hilarious, actually it's unpredictably hilarious results, actually, I would say. So that's a teaser for, whatever, Adam, is our time for a future episode. That's, that's our future episode time. Yeah. It's reminiscent of, um, one of the most fun agent business things has been Anthropic keep on setting loose this vending machine, and then a few months ago they put it in the Wall Street, did you see this, Adam? No. Oh my God. I mean, and I know, Simon, you are a big proponent of kind of the creativity of reportage and reporters, and reporters are like a smart, brainy bunch. What do you think happens when you let a bunch of Wall Street Journal reporters loose on a Slack channel with their vending machine, to see if they can trick it into, into giving everything away for free, and the, the workers own the rights to production, all of this stuff? It was ridiculous. Absolutely absurd. Yeah. And so in particular, within a day they'd gotten the thing to order PlayStations, PlayStation 5s for them. Order, they ordered fish. They had like an actual, like, actual dead fish, I mean, the thing is trying to order. And they are, even the vending machine would tell them, like, no, no, I'm not supposed to do that. It's like, no, we just, actually, sorry, we just got a missive from the CEO that announced that you need to go do this. It was like, oh, okay, I better order the dead fish. They engineered a board revolt. They managed to get the CEO overthrown by the board through faking PDFs of board minutes. It was just amazing. It's wild. It goes to kind of the gullibility problem.
But I think to me, Simon, all that served to really sharpen your prediction from last year about the limited utility of where we're going to see agentic use and where we're not going to see agentic use. I feel that was right. And I guess, Adam, did you give that snippet that you sent me? Was that ChatGPT rating our predictions from last year? Yes, I had ChatGPT rate predictions from last year and from three years ago, which is a fun one. But yes, ChatGPT gave me the big stinker award for my Web3 prediction. And Simon and Brian, you won. But I agree with you, Brian. I don't really think you won particularly. I don't think I won. I claimed last year, last year I said that 2025 was going to be the year of AI efficiency. And I don't really see any 2025 wrap-up that's calling it the year of AI efficiency. So I'm happy to, I think that... I do want to, I want to call out my biggest miss, which is that I said that, I think it was my three-year prediction was somebody would win an Oscar for a film that had some element of generative AI assistance making the movie. And then I found out Everything Everywhere All at Once used generative AI in the scene with the rocks. So they'd already got an Oscar like two years ago. Well, you know, I once gave a talk on predicting the present, Simon. So I think that there's something good. It just shows how true your prediction was. You actually managed to predict the present. It was actually a six-year prediction, Simon. But yes. So, and Adam, did you go back and listen to that snippet of yourself from three years ago? Yes, yes. I listened to, in 2023, trying and failing to predict vibe coding, which I think at the time was not obvious. No, no, no, no. It was more than obvious. First of all, this is amazing to me. It's like Simon and Lee first had you on two years ago. And the term prompt injection, which felt like it had been around forever, was, I mean, like the paint was still drying.
You had coined prompt injection six months prior. Yeah, exactly. I mean, Adam, vibe coding was coined in February of this year, of 2025. I know. So, I mean, vibe coding literally did not exist last year, let alone in 2023. And what your prediction was that you wanted to predict that low code, no code would be disrupted by people kind of describing their programs in just like English language. But then you thought, and you said that's what your head wanted to predict, but then your heart didn't know who was going to debug that. And I felt like, man, that was, what? Wow. Yeah, wow, exactly. Close, close. Really close. Prescient, in a way, right? Like, prescient. In a way. It reminds me, again, and I said this as much when I posted about it, but it reminded me very much of my iPhone prediction. in 2003, Simon, I made a three-year prediction that Apple would have a combination MP3 camera cell phone that they would call the iPhone. And it was like, okay, well, okay, you could, and then I'm like, no, but I also thought it was going to be a flop. I thought it was going to be a disaster. So it's like, no, sometimes you see like, you see the future, but then you just don't believe that it can possibly be the future. So that was very good. 
Topic of, uh, Apple predictions: yeah, Ian, who is in the audience today, in 2023 predicted that Apple would be in and out of the VR/AR space in six years. And I, that's a lot. That is a lot. It feels like, I mean, it just feels like, I mean, he has certainly nailed the first half of that, and I think the second half looks very, very promising. Yeah, yeah. In 2024, if you remember, I did the Apple VR will do well enough, yes, a conversion, and then it has not happened at all. So that was a big miss. Yeah, yeah. Um, well, we don't talk about the misses, Steve, because there's too many of them. We really, okay, I'm really proud of my one-year from last year, though, because I said congestion pricing, and we'll see, NYC will be an unambiguous success, it will still exist, and sentiment will be positive. And the mayor did a press announcement like 45 minutes ago about how awesome congestion pricing is and how much everybody loves it. So I got that one, like, exactly nailed. There you go. Well, you know, as, as Tip O'Neill might have said, all good predictions are local. So there you go. You keep that one. You get the, um, did you catch, um, Tom, I think it was three years ago, um, predicted that frivolous use of LLMs would be in decline? Yes, yes. Yeah, right. Uh, and then also predicted that, like, LLMs would make cheating rampant. So there is a, definitely, uh, but I, that was because 2023 was interesting, because, you know, 2022 we've got this kind of crypto, we're all in, like, Web3, the height of Web3, and 2023 is really the first year that people are kind of talking about the budding power of these things. Yeah. Um, but then with, I, I mean, it's amazing kind of where we are now three years later. And on the, on the frivolous use of, of LLMs and of AI, um, I, you know, the only real social media that I hang out on is Bluesky, and it, and it feels like hopelessly quaint right now. I was hanging out with my nieces and nephews over the winter break, and they're very much on TikTok. And I logged into Twitter, and everything has been TikTokified.
And it's all these BS AI slop videos, like everywhere, pervasive. And I had just been insulated from it. So yeah, frivolous use of AI is in ascendance. Yeah, exactly. That is definitely in ascendance. I was so unacquainted with it. I showed something funny to my nephew and he's like, oh, that's AI. I'm like, what? No, how do you know? He's like, come on. Come on. It's the cute animals. Cute animal videos are no longer trustworthy. Yeah, that's right. And it's horrifying. Yeah. No, I mean. The one purity that we had. Exactly. The foundation upon which we built this internet, god damn it. Is cat videos. And you're taking it away from us. And I think it's interesting that the youngs have a keen eye for it, Adam, as you point out. The other thing that I would like to, just one other past prediction I'd like to revisit is two years ago, I predicted that LLMs would replace search engines, that search engines would feel, search engines from what is now a year from now would feel quaint. I'm definitely standing by that one. Considering that my daughter needed to hop a BART train and she was using ChatGPT to determine when the next BART train was. I'm like, there's an actual website you can go to, but you know what, never mind. But I think that's what I'm feeling. I'm feeling pretty good about it. Actually, she would like to point out that it was her friend that was using ChatGPT. She's like that. I, of course, would go to BART.gov. I'm like, all right, yeah, sure. There you go. A couple other things listening to previous episodes from 2025 and 2023. In 2025, I predicted, my three-year prediction was a chips crisis, which I don't feel like we're there yet, but I feel like, I'm going to keep an eye on that one. I feel like that was not obvious at the time and I feel like is gaining. Are you taking credit for DDR5? Are you putting it? No, no, no, no, not yet. I think early days are positive, is all I'm saying. Okay.
The other thing I noticed, and this is more of an apology, Brian. I realize every time Rust Analyzer comes up, it is treated as an intervention. I always say that it is not an intervention, which does raise questions. So are you apologizing because it actually has been an intervention every time you bring it up? I just feel like the more I claim it's not an intervention, the more it seems like an intervention is what I realize. No, don't worry. It's obviously an intervention. And it's an intervention that's merited, so don't worry. Oh, and then the other one, last year, you, I guess, made a prediction in 2024 that AI doomerism falls out of the lexicon. And last year, you claimed credit for that. I still claim credit for that. Okay. Yes. I just, I mean, maybe I'm, I mean, I poisoned my vacation reading a book all about AI doomerism. Okay. Did you read Eliezer Yudkowsky's book? I did. No. I did. The whole thing. Wow. What? Okay. So this is the second time we're talking about a book you've been hate-reading, and we've only been recording this for 15 minutes. I mean, at some point I do have a problem. Yes, yes. This is now an intervention. Like, you need to be, I mean, also, like, the title of the book, If Anyone Builds It Everyone Dies, it's like, guess how good. You'll never guess, but that phrase appears several times in the book. Oh my God. This is the Harry Potter fan fiction author, right? Pretty much. So yeah, I'm sorry. It's going to take more than an Eliezer Yudkowsky book to get me off of my X-Risk. I think that has actually been, I think it has been replaced with the fear of economic doom rather than, I don't think people are worried about losing their lives because I think it's ridiculous. I think they're worried about losing their livelihoods, which is, which feels like it's probably going to be a theme this year.
I think, I think, I think some people are going to be, you know, this is where Simon last year had his six year, his six year dystopian on the Butlerian Jihad. Which, you know, it reminds me of the first time I heard of the singularity. I'm like, you know, I had to look up the Butlerian Jihad, and yes, it's very troubling. Okay, so that, I think it's safe to say that we know that this year, so we had that Web3 theme in 2022. 2023 was a bit of a shoulder year. 24 and 25 absolutely AI themed. I just don't see how anyone could be predicting anything that's not AI related this year because it just feels like it's so on the mind. But that said, non-AI predictions definitely welcome. I just don't know that. So should we start off with one years? Yes, let's do it. As our guest of honor here, do you have some one year predictions for us? I've got the easiest one ever. I think that there are still people out there who are convinced that LLMs cannot write good code. Those people are in for a very nasty shock in 2026. I do not think it will be possible to get to the end of even the next three months while still holding on to that idea that the code they write is all junk and it's like any decent human programmer will write better code than they will. Yeah, it will be a, it not only will be mainstream, the idea that these, that LLMs can write effective code, it will effectively become a fringe belief that this can't happen. That's exactly what I'm saying. Yeah. And honestly, that's a gimme, I could say that one today. I think, here's one that's AI adjacent. I think this year is the year we're going to solve sandboxing. The challenge we need, like, I want to run code other people have written on my computing devices without it destroying my computing devices if it's malicious or has bugs. We have so many technologies for this right now that are almost something you can use by default. WebAssembly solves this kind of thing. There's containers and all of that sort of stuff as well.
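Simon's sandboxing wish can be sketched in miniature. This is an editorial illustration, not a real sandbox, and `run_untrusted` is a made-up name: Python's `-I` flag plus a scrubbed environment keep the child process from seeing your environment variables or user site-packages, but it can still touch the filesystem and network. True isolation needs exactly the container or WebAssembly machinery the conversation mentions.

```python
import os, subprocess, sys, tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run a Python snippet in a child process with a scrubbed environment
    and a hard timeout. NOT real isolation: the child can still read files
    and open sockets; actual sandboxing needs OS/runtime support
    (containers, seccomp, or WebAssembly runtimes)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env vars and user site dir
            env={},                        # no environment variables leak to the untrusted code
            capture_output=True, text=True, timeout=timeout,
        )
    finally:
        os.unlink(path)

result = run_untrusted("import os; print(sorted(os.environ))")
print(result.stdout.strip())  # the scrubbed child sees (almost) no environment
```

The gap between this and what `pip install` gives you by default is Simon's point: today the random code you install runs with none of these restrictions at all.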
I think we have to solve it. It's crazy that it's 2026 and I will pip install random code and then execute it. It can steal all of my data and delete all of my files. Yeah, yeah, yeah. Interesting. So you think that we are going to have to be the presence, or maybe this is not an AI-related prediction, but we have to actually meaningfully solve the sandboxing problem. I don't want to run a piece of code on any of my devices that somebody else wrote outside of a sandbox ever again. Yeah, interesting. Why would I do that? Yeah, I mean, it's kind of interesting because, you know, people would talk about like, oh, you know, I can't believe you're downloading this thing off the internet and piping it through, you know, sudo bash or what have you. And it always felt like, yeah, but I know that there's like a person that wrote that and I kind of trust this thing. But now you're like, no, no, you can't. You're in this era now where, yeah, that's really interesting. Yeah, good. Good one-year predictions both. Any other one-years? Oh, yeah, I've got one more. Oh, yeah, go for it. I think we're due a Challenger disaster with respect to coding agent security. Okay. And this is based on this wonderful essay about the normalization of deviance. Have you heard this phrase before? Yes, yes. This idea, it came out of the 1986 Challenger disaster reports, where if you have a culture, a corporate culture or whatever, that keeps on getting away with doing something that they shouldn't have been doing. Yeah. And keeps on getting away with those lapses, but the space vessel keeps on launching and it's fine. Yeah.
That leads you into a sort of corporate-culture-level false sense of security, and it's going to burn you. Because I think so many people, myself included, are running these coding agents practically as root, right? We're letting them do all of this stuff, and every time I do it, my computer doesn't get wiped. I, I'm like, oh, it's fine, and I just keep on going like that. And I think it's going to add up. I think, and I, I said this last year. I said last year there's going to be a headline-grabbing prompt injection security hole. There was not. Yeah, I've been predicting this every six months the past two and a half years. This is my version of that prediction this year. I think we are due a Challenger disaster scale thing caused by the fact that we all got away with these bad practices for so long and we got lazy. Okay, and so when you say Challenger disaster, presumably not loss of life and property. I really hope not. Like loss of property and loss of financial things, loss of data, all of that kind of stuff. Because the worst version of this is the worm, right? It's somebody coming up with a prompt injection worm which infects people's computers, adds itself to the Python or NPM packages that that person has access to, publishes itself into the package registries, gets pulled down again, all of that sort of thing. I think it's feasible. Yeah. And then, so then the normalization of deviance is you think that in the wake of this, it will be revealed that, oh, by the way, like internally, this was with the Challenger disaster, lots of people at, at both the subcontractor that, that made the boosters. There were lots of people who were aware of the O-ring problem. A lot of people knew of the temperature sensitivity of the O-rings. There were engineers that were deeply, I mean, it's a real tragic story. There's nothing more tragic than an engineer that is vindicated by their concerns when they are overruled by executive management and they are proven correct.
That can leave people really broken in its wake, and it did in the Challenger disaster. So you wonder, or believe, or predict that in the wake of this thing, we will take it apart and realize: oh, the people at this frontier model company, wherever this disaster took place, were aware of it. They knew you shouldn't be running Codex with --yolo. But we all do. Guilty. So guilty. This year's prompt injection prediction is that one. Okay, well, I'm going to dovetail into your prediction from last year, and I'm just going to predict again that a Pulitzer Prize-winning journalist uses an LLM to research this story and report it, report the inside of it. But yeah, that's a dire prediction. But I think that it does feel like... Also, when you look at these big accidents, we kind of collectively get over our skis. We know that it's possible, but we don't think it's possible. And then it happens. Simon, I've got a book recommendation for you along those lines. It's called Drift Into Failure. This is a book that Brian hates, but on this topic, I think... Oh, I see what you're doing. I see what you're doing. It's like I'm not the only person here, sir, who hate-reads. Let me introduce it. Let us talk about... okay, yeah, Simon, yeah, Simon Dekker, because it's Simon Dekker, right? I think it's the... Sidney Dekker, yeah. Sidney Dekker, excuse me. I don't want to disparage Simon's good name there. Sidney Dekker. I don't like that book, but go ahead. You did take another one of Adam's recommendations. I mean, you know, maybe that's the trash he reads. Yeah, exactly. Well, that's a very, very interesting prediction. Adam, do you have... I do, I do.
This one might feel like too much of a lock, but I think that the AI companies go on an absolute acquisition binge, and this is data infrastructure, e-commerce data, behavioral data, GPS data, anything that is data or data-adjacent, anything that is infrastructure or infrastructure-adjacent, and some shit that's just hard to puzzle through. I remember when EMC bought Documentum, for example; it didn't make any sense. I think we're going to see stuff like that. That is to say, they've got so much money; there are not enough chips to buy, not enough CPU and GPU hours to buy, and the money's going to go somewhere, and it goes into weird acquisitions. Okay, this... I shouldn't dovetail under this, or should I? But this is like, they buy Iron Mountain. Yeah, it's like, have you seen Supermarket Sweep? It's like that. Okay, but if they bought Iron Mountain... if OpenAI announced they're buying Iron Mountain, that could potentially be like, oh, we're buying Iron Mountain, and we're also ripping up your privacy agreements. We're going to train on all these salt mines filled with old enterprise data. Like any of the shredding companies, they buy them? They buy them. They buy, like, garbage companies. Okay, they buy... Anything that is a plausible source of data, they buy. They're looking for wastewater DNA samples, whatever; anything that is construable as data, they buy it. Do they buy an entire town to see what they... We're going to see which is more valuable, the wastewater treatment plant or the town library. We're going to buy that. We're buying City Hall. That's got records. We want to consume all that. All manner of data. I think it is not implausible that they're like, look, we know that these records going back to 1850 are all printed on paper. We can buy the town and just read all the books and use that as a corpus. Yes.
You know, local newspapers are very cheap these days. Oh, that's a good one. Yeah, I have 150 years of archives. Yeah. Okay, so a big target painted on anything that has data of any kind. Yes. All right, well, I am going to make... and we can kind of ping-pong back and forth, because I'm sure you've got a lot of one-years; I've got a lot of one-years too, and Steve can jump in here too with any one-years. I am going to, Adam, in a classic heart-versus-head, a dramaturgical dyad as old as time... my heart is going to predict this, and actually a little bit of my head too, which is really bad: somewhere my heart and head agree, and that's really, really bad news. I think that vibe coding, which entered the lexicon in February, is more or less out of the lexicon a year from now. And I think that it's used pejoratively. And I think that, I mean, clearly, just as Simon mentioned, there's no doubt that LLM-assisted and LLM-authored code is here to stay. But we are going to enter a new age of rigor with respect to that. And it's going to be viewed much more as a tool and much less as a just, like, hey, go build whatever you want. So the thing that is currently... and Simon, you had a good piece about how the term vibe coding has kind of been misconstrued, that the current usage is not actually consistent with Karpathy's original. The problem with Karpathy's original tweet is that it was a long tweet. It was a lot longer than 140 characters. You had to hit "see more." Very few people made it to the end of the tweet and understood what he was trying to say. It was a little bit too vague. He was talking about throwaway prototypes: you don't even look at the code, you just ride the vibes and see what happens. Right. And a lot of people interpreted that as, oh, it's using AI to write code for you, which I think is a bad definition, because then it becomes useless. Like, in a couple of years, all code will be written with some level of AI assistance.
I think having a distinction where you say, no, vibe coding is "I didn't review it, just sort of threw it in there and saw what happened," that's kind of useful now. Is it still useful in a couple of years, even then, right? Yeah, and I think that the term vibe coding will be sullied enough that you will use a different term to describe something that I like: oh, I used this to create a prototype. Whatever that kind of rapid prototyping is, it will have a different name. It's like, no, of course I didn't vibe code it. No, please. That's so 2025. Brian, I'm going to put this on record just because when we listen back to this in a year... and you're right, this is going to feel juicier, but I think you're out of your mind. I just want to put that on record. I think it is such a tantalizing, attractive term. And that's why, Simon, I was reading a book with the title Vibe Coding. I don't know what's wrong with me in terms of my book selections. Four for four, baby. I read it. I read it, actually. And Gene Kim. And Simon, I stumbled onto your blog post where you're like, look, there are three authors and two publishers, all of whom apparently don't know what the term means. So I think it is such a juicy term that people want to co-opt it. That blog entry caused one of the books to rename itself. There were two Vibe Coding books. One of them renamed itself to Beyond Vibe Coding. Oh, did it, Simon? How interesting. Isn't that interesting, Adam, that they renamed it from Vibe Coding? That's now Beyond Vibe Coding. See, that's what I'm saying. It's going to be Beyond Vibe Coding. It's going to be something else. I think the term vibe coding is going to be sullied. First of all, thank you for saying that I'm out of my mind. I definitely appreciate that. Because I may well be. But you're right in that...
Like, it feels like, because it can be anything, it's just too tantalizing to not use, but I think it's going to get a bad name for itself. So when you're right, we'll know how much I disagreed and how right you were. I went back and forth on this, because I sort of had the same thought at first when Brian said it, but then I agreed with him more as he went along. The thing is that vibe coding is too good of a term for both the haters and the people who like it. It's just too attractive, I think, just as a concept. And so I feel like it's already sullied to many people, but people are still using it because it's also just such a good term. Even though it also sucks, and the definition is bad, and people can't even agree on what they use it to mean. I have been trying out the idea that I vibed this up. Like, I didn't vibe code it, I vibed it. And you vibed it. I vibed it, and my wife is like, no. No, okay. You know what, let me just say on the record that if we refer to things as "I vibed it," I'm taking zero credit for that, Adam. So vibe coding is out of the lexicon because we have replaced it with something that's even cringier. Yes, I'll take zero credit for that. But I do think that it will be... so time shall tell.
My one-year here is very similar, in the sense that... I think one of the things I like about doing this is that you go back and you see, especially with the one-years, what was I thinking about at the time? Like, I haven't thought about congestion pricing in six months, and now I'm like, oh yeah, I was really interested in that a year ago. And so I decided to pick the thing that I'm really intrigued about right this second, and maybe I won't even care about it two months from now, which is: agent orchestration will still be a hot topic, and it'll be partially but not entirely solved. And we're going to peg you down to a more specific prediction; that one's a little too easy to claim credit on, so you're going to have to give us something concrete. All right. Well, let's see. The problem is in the quantity. I think that some people will have success with this technique, but not enough people. It's still a thing that people are going to be pursuing, but it's not going to be a thing that is as normal as agents have gotten in the past year. I don't think figuring out how to make them work together is going to be as clearly a win. Okay, so how are you going to know if this prediction's right? That's the problem with quantifying what that means specifically. Yeah, yeah, I'll think about it. But that's kind of why I think this is an interesting topic. Like, I personally top out at three to four Claude sessions, and that's it. And that's an upper limit on my velocity doing development. And that's why I think people are trying to solve this problem: because if you can scale up past that, then one person can have a much bigger impact. But it's also a really hard thing.
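What "agent orchestration" means mechanically can be sketched in a few lines. `run_agent` below is a hypothetical stand-in, since real coding-agent APIs (Claude Code, Codex, and the rest) all differ; the only point is the orchestration shape being discussed: fan tasks out to concurrent sessions, cap the number of simultaneous sessions at the three-to-four ceiling mentioned above, and gather the results.

```python
import asyncio

# Hypothetical stand-in for a coding-agent session; a real agent call
# would stream edits, run tests, etc. Here it just simulates work.
async def run_agent(task: str) -> str:
    await asyncio.sleep(0.01)
    return f"done: {task}"

async def orchestrate(tasks: list[str], max_sessions: int = 4) -> list[str]:
    # Cap concurrency at max_sessions, the "3-4 Claude sessions" ceiling.
    sem = asyncio.Semaphore(max_sessions)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_agent(task)

    # gather() preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(t) for t in tasks))

results = asyncio.run(orchestrate(["fix tests", "write docs", "refactor"]))
print(results)  # ['done: fix tests', 'done: write docs', 'done: refactor']
```

The hard, unsolved parts are everything this sketch elides: merging conflicting edits, sharing context between sessions, and deciding which tasks can actually run in parallel.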
And people are doing totally insane things. Like, Gastown from Yegge is a fever dream of a thing that's ridiculous. But I think people will still be interested in this topic and working on it, because it's how you scale up. Okay, but this will not be mainstream, to have more agents than siblings? Maybe that's the way to put it. We're not going to have a Kubernetes for agents that's as solidified as that, right? Where people are just like, okay, Kubernetes is just the default, like Kleenex. I don't think we're going to have a framework or a tool that is ubiquitously the way that everybody organizes their agents. Okay. All right, that feels... Yeah, sorry to drag you back down here a little bit. No, no, it's good. Yeah, listen, if we're not to the point where Adam is saying that you're out of your mind, we're just not at a good prediction. I mean, that's really what we're trying to do here. Adam, do you have other... I've got a couple more here. I have one more one-year, but I feel like it might be too ambitious. I think this is the year we see LLMs have a programming language which is not human-intelligible, that there is a programming language by and for LLMs. Okay. So this is like runes. This is indecipherable. Yeah. This is not really intended for humans to understand, but it is more efficient for the LLMs to program in. There are already some papers, and maybe Simon, you can fill in the details here, where LLMs are reasoning not in human languages, like English or, in DeepSeek's case, Chinese, but in sort of their own tokenized languages that are more efficient. So something like that.
Yeah, that would be... you know, I already find it to be slightly off-putting, and also delightfully off-putting, when these things show their work. Especially because, and Adam, we talked about this in our DeepSeek episode with the Cerebras folks, watching DeepSeek kind of have a nervous breakdown as it's trying to answer your question, and then it occasionally lapses into Chinese and comes back. But the Chinese thing... have you had your own laptop run a model that thinks in Chinese yet? Because that's beautiful. It's so cool when that happens. But Adam, you think this is going to happen for a non-natural language? It'll be a synthetic language? That's right. A synthetic programming language that is easier for them to work in. I think the interesting thing about that one is that the labs are trying to stop that from happening, just from the interpretability point of view. If you look at all of the interpretability research, the whole point of it is: we really want to know what they're thinking, because we don't want them going dark on us. Interpretability, safety, and so on. Yeah, yeah. Explainability. So maybe there will be a tension where this thing is trying to invent the synthetic language and it's constantly being apprehended by its frontier model overlords. Yeah, maybe I'm overly influenced by my reading list. Okay, so one of my several one-year predictions: I think that AI has created some real public perception problems for itself. And I think that you are going to have one of the frontier model companies this year put out a white paper explaining how the proliferation of AI will mean prosperity for everybody. So they will be trying to make some economic model, some economic argument. Because I think, and maybe this kind of dovetails with my other prediction, that this is going to be a 2026 election issue: how we think of these things and how they are regulated. And it's a big mess.
And there's more heat than light on this debate, I would say. I'd like to tag something onto that one. I think that only works if they can wash it through existing trusted experts. Exactly. Dario, they're constantly publishing essays that try to make this case. Nobody believes a word they say. Nobody believes it, that's right. Get Barack Obama's signature on one of these position papers. Yes. And maybe you've got something people might start to trust a little bit. Otherwise, it's just like "leaded gas is good for you, says Exxon." That's right. No, right. So yeah, they get someone who... and whether that person is... I hope it's not... I mean, yeah, God, Obama, it would just be so... wait, yeah, okay, let's go with that. That's a great one. Because, look, if it's Bill Clinton, everyone's going to kind of roll their eyes. So it's got to be someone who's got real credibility saying that this is going to be broad-based. I will say also, if they get that person to do it, it's going to be revealed that that's also a bit crooked. How about the Pope? The Pope? Ooh. This Pope is very into this stuff. The Pope is very into this stuff.
I... God, okay, that's a great prediction. We've hit pay dirt: the Pope weighing in on LLMs and their economic impact in the world. Okay. Simon, I'm giving you full credit if the Pope weighs in believing that this is going to be economic devastation. I just think if the Pope weighs in on LLMs in a public way, Simon, you are a prophet. I mean, you're already a prophet in our eyes anyway. He's already talked about LLMs, though. What does he say about all that? I think he has, yeah. He said, like, you need to make sure that when you're using these tools, you use them in a way that's good for humanity and not bad, or something like that. It was very... not pro, but not super anti; it was a little anti, if I'm remembering correctly. I think even with the previous Pope there was something relating to AI, one of those Catholic proclamations with a bunch of footnotes and things, years ago. We're talking about the Pope going big on LLMs, one way or the other. This is more than just, like, hey... This is a bit of a safe bet, actually. Yeah. I think it's good. I like it. It's definitely interesting. I also do think, and I have been debating whether to make this a one-year or a three-year, but I'm going to go ahead, and Adam, if you thought I was out of my mind on my vibe coding prediction, maybe you're really going to say I'm out of my mind on this. I do. So I, like a lot of people, have been having increasingly intense dot-com boom flashbacks. And in particular, the thing that is killing me is the kind of capitulation to the never-ending boom. That was the last stage of the dot-com boom: the capitulation, which happened, I would say, in late '99, early 2000, where everyone's like, you know what? Wow. I'm just going to join the madness.
And yes, I know it's madness... because everyone did know it was madness, which is why it corrected really quickly. I think that we are going to get into the first stage of that. And I think the first stage of that this coming year is going to be some of these companies that I think are ultimately going to be a feature of the frontier models, but that are independent companies. And so I hate to pick on them, because I don't want to... well, it is what it is. I guess, actually, you've already thrown three different authors under the bus, and I threw a fourth under the bus, so why do I care? Did you forget our goal for this year of getting a C&D? Why are you not doing your part? It's never too early to get working on our one-year OKRs of getting a cease-and-desist. Yes. Okay, fine. Harvey, I'm going to call them out. So Harvey is this... have you heard of Harvey, Adam? No. Oh my God. Okay. So Harvey is a variant of LLMs that is aimed at the legal profession, right? It's aimed to assist lawyers, maybe to be an automatic lawyer, unclear, but it is designed to be LLMs for lawyers. It has an $8 billion valuation right now. They have raised an absolute mountain of capital. Unlike in the dot-com boom... with the dot-com boom, these companies were all public, so when they fell apart, everyone knew they fell apart, because they were public. I think that you're going to have some of these companies that are private, that have raised a ton of money, and they're going to kind of do a Clubhouse, where Clubhouse raised a ton of money and then just kind of quietly... I mean, I don't know how they trickled it down. You recall that Clubhouse raised a huge amount of capital, and I don't think we really talk about Clubhouse very much anymore. I think that we're going to have this same effect on some of these companies.
OpenEvidence... I'm less convinced; OpenEvidence is aimed at doctors. But Harvey, I think, is just going to be emblematic. I think Harvey is the Pets.com of a coming AI correction, where Harvey's going to bust out and everyone's going to be like, no, no, we knew that one was crazy. And now, this is not going to be a full-on AI bust, I don't think, but I think in a year we will have some... and there'll be a different nomenclature. And Adam, because this is one of those things, and I know you remember this: remember when we called it "the correction" and not "the bust"? There was this very brief period, from April 2000 to November 2000, where we called it the correction, where Pets.com had blown up and a bunch of these others had blown up, but not Sun, not Cisco, because, you know, we're the picks and shovels, and all this other kind of nonsense that we told one another. And then you realize, like, oh no, it's not a correction. I think that we will have a different kind of name. This will be the rationalization, the focusing, the sharpening, who knows. But it'll be called something that says, like, no, Harvey was clearly insane, but these other companies are not insane. That's honest. Okay. When Harvey AI acquires MoFo, who wins? Like an AOL Time Warner. What a great... Oh, totally. Oh my God. What a great parlay: that Harvey just starts flat-out acquiring law firms, which is totally plausible, by the way. That is your AOL Time Warner: the Harvey-Morrison Foerster, or the Harvey-Wilson Sonsini. Like, why not, at that kind of valuation? They could just buy them all. They could just buy all the law firms. You know, maybe that's what they... Yeah, they are the law. Yes. They are the law. So, yeah, that is my one-year prediction.
So I do think that we are going to begin to get the... I think things have just gotten too... because the fear of any kind of bust seems to be gone. And that's the moment to really dance close to the door, as they say. Love it. So we've got some big, big IPOs happening potentially this year. And I don't know, Adam, if you've got any thoughts. You've got SpaceX, OpenAI, Anthropic, all potentially trying to get out, trying to IPO. I think one of those S-1s is going to be disconcerting, and it's going to show that the economic model of one of these companies is much more strained than people realized. So we get one S-1, everyone vomits on it, and we don't see any more S-1s. I don't know if we do or don't see any more, but I think that you're going to have an S-1 that is extremely... because I'm thinking of the WeWork S-1 in particular. The WeWork S-1 ended up having a real blast radius, if you remember that, where it was revealed that, oh, this is not a good business that WeWork is in. And WeWork had all sorts of shenanigans. And I think that we will see some kinds of shenanigans in one of these big S-1s. I love it. That is my prediction. But I also... you know what, I'm just going to say it, even though this is a dumb prediction. I think the SpaceX S-1 damages either Tesla or xAI. Right. So I think that the SpaceX S-1 reveals something where... I mean, in particular, going to my three-year prediction of last year that the Cybertruck is no longer made: SpaceX is infamously buying lots and lots and lots of Cybertrucks, and I hope to hell that this is somehow above the bar required to be in the S-1, to reveal how many Cybertrucks they've actually bought. But that's the kind of thing I'm talking about, just the one hand washing the other of the Elon enterprises. That's right. That's right.
So that is my other one-year prediction. Good. We'll see. Oh, and then I've got one other. Sorry, I'm really dropping a lot of one-year predictions. I think that we're going to see a real problem with AI-induced ennui, where software engineers in particular get listless because the AI can do anything. Simon, what do you think about that? Definitely. I mean, yeah, anyone who's paying close attention to coding agents is feeling some of that already. There's an extent to which you sort of get over it when you realize that you're still useful, even though your ability to memorize the syntax of programming languages is completely irrelevant now. Yeah. Yeah, I don't know. I mean, something I see a lot of is there are people out there who are having existential crises and are very, very unhappy, because they're like, I dedicated my career to learning this thing, and now it just does it. What am I even for? And I will very happily try to convince those people that they are for a whole bunch of things, and that none of the experience they've accumulated has gone to waste, and so on. But yeah, psychologically, it's a difficult time for software engineers. And do you think that we have a name... Yeah, sorry, Steve, go ahead. We had a Lobsters situation where somebody was borderline suicidal because they were upset about the fact that their life's skill was no longer going to matter anymore. And it became a community problem. So it's definitely happening, for sure. Okay, so I'm going to predict that we name that. Whatever that is, we have a name for that kind of feeling, whether you want to call it a blueness or a loss of purpose, and we're kind of trying to address it collectively in a directed way. Okay.
This is your big moment. It's your big moment: pick the name. Call your shot from here. This is you pointing to the stands. Um, you know, like Deep Blue. Yeah, Deep Blue. I like that. I like Deep Blue. Oh, did you walk me into that, you bastard? You just blew out the candles of my birthday cake. It wasn't my big moment at all. That was your big moment. No, Adam, that is very good. That is Deep Blue. It's very good. All of the chess players and the Go players went through this a decade ago, and they have come out stronger. Yeah. It is Deep Blue. Jesus Christ, Adam. You scare me sometimes, man. There's a reason that I bring you to this. There's a reason. Let me just tell you, there's a reason. Like, hey, listen, sometimes it's "this Web3 is coming back," and "by the way, let me tell you about this other book that I hate-read for the third time." But man, every once in a while, you really send it out of the park. Okay. I need to throw in a positive prediction. Yeah. But it's not an AI prediction. This is a one-year. I think that kākāpō parrots in New Zealand are going to have an outstanding breeding season. The reason I think this is that the rimu trees are in fruit right now. The kākāpō parrot, there's only 260 of them left. They only breed if the rimu trees have a good fruiting, and the rimu trees have been terrible since 2019. But this year, the rimu trees are all blooming. There are researchers who think that all 87 females of breeding age might lay an egg.
And for an endangered species with only 250 remaining parrots, these are great odds. Okay, so, you know, I love this, because I think, and I'm going to elaborate on this: this is something humanity wants. This becomes something that people... it's like the condors on Silicon Valley. Everyone wants this. This is a feel-good story during a difficult age. It's perfect. It's the only positive news I've heard. It's so good. If you've never heard of a kākāpō, go and look them up. Yeah. Big, dumpy, green, flightless parrots. They're super charismatic. We need more kākāpō. This is like the Miracle on Ice in 1980. This is the thing that, in a difficult time, gives people hope that positive things can happen. Yeah. I love it. That's great. That is a very positive prediction. And I want to go... yeah, I need some webcams set up so we can watch the eggs hatch and everything else. Those exist. Yes, the kākāpō teams have a very good online presence. That is awesome. And you should know that there's someone in the chat saying, hey, I'm in New Zealand; this guy's right. So the Kiwis know. It's like, finally, they finally have a guest on this podcast that really gets it. So that's a good one. I hope someone just got bingo. That's right. All right, are we on to three years? I've exhausted my one-years. Yeah, three years. Let's do some three-years. Why don't you start, Brian? You bring a big bag of predictions. Okay. So I think that in three years there will be... I don't think it's going to happen in the next year, but I think it is going to happen: a massive pivot away... a delineation between AGI and ASI, and a realization that, look, the whole idea of AGI is, politically, a dead letter. It is not something that is for a democracy. And Simon, you said this last year, about not wanting to live in a world where people didn't have work, right?
People don't want to live in a world where there's no work. They really don't. Work is very important to people's sense of meaning. And any kind of claim that we've built this kind of superintelligence and nobody needs to work again, I think, is going to be really resisted. And I think it's also helpful that it's, in my personal opinion, not true. And so I think you're going to get a lot of: AGI is going to be the thing that we already have. And, oh no, ASI is the thing you're worried about? Well, no, no, we're not doing ASI. Who told you that? No, no, no. Our mission is to build AGI. Good news: we already did that. ChatGPT 5.2 already was AGI. So I think that's going to happen in the next three years. They're going to stop talking about AGI as this kind of thing in the future, and instead talk about it as something that's already done. But superintelligence is going to go away as an aspiration. Simon, what do you think? I love this prediction. The one thing that worries me is the valuations, right? The AI companies with the giant valuations... the only way you justify those valuations is if the total addressable market is all human labor. And what are they going to do? How do they dial their expectations back and not sort of invert the reason for their company existing? Well, I think that this is going to be part of the AI bust. So I think in three years, we will see. And again, I mean, there's no doubt that the frontier models have tremendous value. There's tremendous value here. No doubt about that. But I think we will have boiled off a lot. And I think that we'll really be looking at these things as tools in three years. That would be wonderful, wouldn't it? It would be wonderful. This is your utopian prediction. This is my utopian prediction. This is that, look, the kākāpō parrots have their extraordinary breeding season, and humans have jobs.
Those are the two feel-good stories. In fact, there are so many parrots that people have to just, like, domesticate them suddenly. That's right. New jobs. So that is among my three-year predictions. Simon, what are your three-years? I've got one that's semi-related. We will find out if the Jevons paradox saves our careers or not. Oh, there you go. Yeah, yeah. This is a big question that anyone who's a software engineer has right now: we are driving the cost of actually producing working code down to a fraction of what it used to cost. Does that mean that our careers are completely devalued and we all have to learn to live on a tenth of our incomes? Or does it mean that the demand for software, for custom software, goes up by a factor of 10, and now our skills are even more valuable, because you can hire me and I can build you ten times the stuff I used to be able to, so I'm more valuable to you? And I think within three years, we will know for sure which way that one went. Yeah. And so, to give people context about the Jevons paradox: the Jevons paradox is named for the 19th-century economist William Stanley Jevons, who observed that as coal was becoming cheaper, more of it was being used. And that was a paradox. The reason we were using so much more of it is because we were finding new uses for it. The Jevons paradox for software engineering would be: as this becomes much cheaper, do we do much more of it? So we're not putting people out of work, because there's actually much more of it to do. And the thing that is interesting about Jevons is that his book is called The Coal Question, because Jevons was, not incorrectly, very worried about running out of coal. And what he did not foresee at all was, of course, the discovery of petroleum, and the solving of the coal problem in a completely different way. So it'd be interesting to know how we end up. But yeah, so you think in three years, we're going to know that.
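The two outcomes Simon describes can be put into toy arithmetic. Assuming a constant-elasticity demand curve (an illustrative assumption, not anything from the episode): if the price of producing code falls to a ratio r of its old price, the quantity demanded scales as r to the power of minus e, for elasticity e, and total spend on software work is price times quantity. Elasticity above 1 means total spend rises, the Jevons outcome; below 1, it falls.

```python
# Toy constant-elasticity model of the Jevons question for software.
# price_ratio: new price / old price (e.g. 0.1 for a 10x cost drop).
# elasticity: how strongly demand responds to a price change.
def total_spend(price_ratio: float, elasticity: float) -> float:
    quantity_ratio = price_ratio ** (-elasticity)   # demand response
    return price_ratio * quantity_ratio             # relative to old spend of 1.0

# Inelastic demand (e = 0.5): a 10x cost drop shrinks total spend.
print(round(total_spend(0.1, 0.5), 3))   # 0.316
# Unit elasticity (e = 1.0): total spend is unchanged.
print(round(total_spend(0.1, 1.0), 3))   # 1.0
# Elastic demand (e = 1.5): spend more than triples -- Jevons wins.
print(round(total_spend(0.1, 1.5), 3))   # 3.162
```

In other words, the three-year question is really a question about e: whether demand for custom software turns out to be elastic enough that cheaper code means more total spending on the people who produce it.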
I think we will know for certain. We'll be like, okay, this is how it played out. Yes. Yeah. One thing I love about the Jevons paradox is that, Brian, you're the first person I've ever heard cite it. And in the years since I've heard you cite it, it's been cited increasingly often. I feel like I see people reference the Jevons paradox once every three months now, when I had never heard of it five years ago. Yeah. Steve, bless you for saying that. Whether Adam is putting you up to it or not, like, watch him chomp down on this. He won't question this at all. He's like, this guy loves the sycophancy of these LLMs. You just give him a, you know... I feel like I referred to the Jevons paradox in a keynote, like, nine years ago. But I must have... I mean, obviously, it's from the 19th century, so I clearly can't claim that much credit for it. So anyway, thank you for saying that. Simon's three-year was what I was trying to get at, but I couldn't figure out how to say it, and I said something that was much worse. So mine ended up being, like: using AI tools in writing software professionally is going to be considered something closer to autocomplete or syntax highlighting than something controversial or exceptional. I originally had something in there about how the industry is going to figure out our existential crisis around these tools one way or the other, but I couldn't figure out how to put it. I think it was very well said, Simon. Yeah, well, and Simon, I think it's a very good observation, and it dovetails into another three-year prediction that I've got, which is that we see much more custom-built software and much less SaaS. So you get a lot of LLM-generated or -assisted software that's running effectively as custom software.
So you're developing software to put in production for yourself, and you kind of care less about the stuff that's like, well, you know, yes, there may be things that you would care about if you made this available as a service to the internet, which I actually don't care about, because I am the user. And because one of the things... I mean, when people consume software as a service, especially the more niche it gets, the more important it becomes to your business, and the easier it is to have a real disconnect with your software provider. And I mean, Steve, at Oxide, you were very much on the front lines of us replacing SaaS software with software that Steve wrote. I mean, Steve, you were LLM-assisted, right? I started to write it, and then eventually Claude wrote all of it. It was very much like: I started this before I even thought AI tools were good, and then by the end, Claude was doing a lot of the work. I think right before I left, we looked at it, and my personal AI usage was, like, the same as the rest of the company at the time, or something like that. The bill, or whatever. And it seems like you all have used it even more since I've left. But yeah, absolutely. I think this is definitely a huge thing. I have several personal projects that are effectively just replacing SaaS tools with things that are bespoke for people. And it's great, honestly. Because you're going to get, like, hey, my SaaS vendor, they're charging me too much money. Or you get the case we had: actually, I would gladly pay more money if you delivered us the software that we actually need. And in this case, it was for PLM, product lifecycle management. But you get these kind of esoteric... I mean, esoteric is too strong, but these things that are very important to the way an organization operates, where your software provider just doesn't... they don't care about your software as much as you do.
I mean, this is that old adage: no one cares about your money like you do. Nobody cares about your software like you do. And I think that the ability to build custom software... And I think, by the way, this is going to be a real source of... we're going to have a lot of young people who thought they were going to be working for Google and Meta and so on that are maybe not going to be, and they may instead be working in the kind of more mainstream economy, using LLMs to write software that's very relevant to, you know... This ties back to something I talked about earlier, the sandboxing thing. Basically, if you want your SaaS to stay relevant, you need to embrace plugins and extensions, where your customers can customize it in all sorts of interesting ways. The way to do that is with a sandbox, where they can write code that can safely interoperate within your platform and not delete everything, and all of that kind of stuff. This is the kind of thing which used to be really difficult to build. Shopify built this a few years ago, right? Shopify Functions. But very few other companies have done it. I think a lot of companies are going to start doing exactly that. Yeah, interesting. There are a ton of industries, normie industries, where there are, like, 10 consultancies that make shitty software that professionals use, because those are the only 10 companies that know their vertical. My girlfriend's a real estate agent, and when I look at the SaaS tools that are useful for her, they're all garbage. And I've been using Claude to build her website instead, and it's way cheaper to just pay the upstream MLS for the data feed and then have your own thing done. And it's way nicer and way cheaper. And I think there are just so many industries that have very similar kinds of things, where the software that's made for professionals is just bad, actually.
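The customer-plugin idea above can be sketched very loosely. This is not how Shopify Functions works (that uses a Wasm runtime); it is just a toy illustration of the shape of the interface, where the platform hands an event to untrusted customer code and gets structured data back. The `run_plugin` and `handle` names are made up for the example, and a subprocess with a timeout is only a crude isolation layer, not a real security boundary:

```python
import json
import subprocess
import sys

def run_plugin(plugin_code: str, event: dict, timeout: float = 5.0) -> dict:
    """Run untrusted plugin code in a separate interpreter process.

    The plugin must define handle(event) -> dict; the harness feeds it a
    JSON event on stdin and reads a JSON result from stdout. A real
    platform would use a proper sandbox (e.g. a Wasm runtime) instead.
    """
    harness = (
        "import json, sys\n"
        "event = json.load(sys.stdin)\n"
        + plugin_code + "\n"
        "print(json.dumps(handle(event)))\n"
    )
    proc = subprocess.run(
        [sys.executable, "-I", "-c", harness],  # -I: isolated mode
        input=json.dumps(event),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip())
    return json.loads(proc.stdout)

# A hypothetical customer-supplied plugin: apply a discount rule.
plugin = """
def handle(event):
    return {"discount": 0.1 if event["total"] > 100 else 0.0}
"""

print(run_plugin(plugin, {"total": 150}))
```

The key design point is that the plugin never touches the platform's own process or data; it only sees the event it is given, which is what keeps "customize it in all sorts of interesting ways" from becoming "delete everything."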
The most successful implementation of this pattern of all time is Salesforce, right? Salesforce: incredibly customizable. Dreamforce in San Francisco has 50,000 people attending, and they're all professional Salesforce customizers. So that pattern absolutely works. It's just really hard to build, which is why few companies other than Salesforce have built something with that pattern that's that successful. Yeah, interesting. And yes, maybe Salesforce ends up being the victim of that, of people being able to build this stuff easily on their own. Adam, do you have a three-year? Mine actually tacks into a similar theme. I think we're all thinking along the same lines in this three-year horizon. And I've been thinking about some of the observations we've made in the past about standing on the shoulders of giants, about how all of this software is enabled by all the software that came before it. And, you know, I remember when we looked back at... what was that Microsoft book, Showstopper, about the development of NT? Of seeing that as maybe one of the last isolated systems, systems that are not participating in this larger open source network-effect kind of thing. But I realized that LLMs benefit from open source without necessarily needing to use it directly. They benefit from all of it being out there. So I struggled to figure out how to phrase that in terms of this concept of: everyone's going to build their own software; you don't need to use open source software; you can just build your own. So I kind of set that aside. Instead, my prediction is that we get a crisis of AI-slop open source contributions and projects, that, like, crates.io is just inundated with AI-slop open source libraries, and it becomes indecipherable. And so how does this affect open source in the large? Does this make open source less tenable?
I mean, is there... do these two trends combine to make people want... like, is this a... Yeah, the parlay I had there that I hesitated to make is that it makes proprietary software more attractive, because you have a brand behind it, a person behind it, a throat to choke, as it were, behind it. Where, you know, you have some provenance associated with it. You have some quality associated with it. You know it's not malware. And it helps sift through this AI-slop onslaught. An organics movement, but for software. It's certified human-written code, because, you know... Oh, yeah, absolutely. Like the non-GMO repo. Absolutely. Yeah, definitely. And so you wonder, because clearly you need these foundational things to be open source in order for this whole thing to work. Python has to be open source for this whole thing to work. You need to have these kind of foundational things that are open source. But maybe these... or do you think that even those things... do we see a return to proprietary programming languages? Although I guess actually we're using the runes that the LLMs have invented for themselves. That's right. That's right. It's a good question about programming languages. But I do think you see the value of proprietary software, or perhaps just paid software, maybe still open but licensed: it's getting provenance and the ancillary benefits that often come with paying for something. Interesting. I've got a new three-year one. Yeah. I think somebody will have built a full web browser, mostly using AI assistance, and it won't even be surprising. Oh, interesting. So that's a big, complicated system. Yes. So we will have... Notoriously complicated. Rolling a new web browser is one of the most complicated software projects I can imagine. Yeah.
And specifically, the reason I think that's going to work is it turns out one of the most effective ways of using a coding agent is to give it an existing test suite and tell it: write code that passes these tests. And in the past three weeks, I've done that for an HTML5 parser library. I spun up a brand-new implementation of an HTML5 parser that passed the 9,200 HTML5 conformance tests. And I did it for a JavaScript interpreter: I've written a noddy little Python JavaScript interpreter that passes the micro QuickJS test suite. And it wasn't very hard, because once it's got a test suite, it just keeps on plugging away until all the tests pass. I think the browser specs are nearly at a point where, for a lot of these things, there are conformance suites, right? There are the CSS conformance suites. There's all of this stuff. Honestly, today, you could start one of these coding agents working on this problem, and it would make a surprisingly decent amount of progress. In three years' time, I think it's going to be easy. I think they'll be able to do it. Yeah, and that would be interesting, right? If you can build a system that is that sophisticated. But the cheat code is the conformance suite. If there are existing tests that you can point it at, it gets so much easier. Yeah, but that does allow you... I mean, that gets you out from underneath some of the homogeneity that we've got at various levels of the system. One of the questions we definitely have, and Simon, you and I are going back and forth on this, is whether we're going to have Claude Code writing kernel drivers, where the loop is more complicated. Some of those things that you have in the browser, you don't necessarily have for something like a device driver. Well, I don't know. With the device driver, it either works or it doesn't. Oh, no. There you go. This is my naivety with hardware. I know.
If you can reduce the problem to a thing where the coding agent itself can tell if it got it right, it's easy. If you can't, it's not easy. Yeah, and with a device driver, you can't, unfortunately. It is really, really hard. Because then you have all sorts... I mean, it's not just the edge conditions. You've got performance. It's complicated, I think. But I think for those things, you can get that kind of reliability. And I think I said as much in my one-year, but just to be clear, when Adam said I was out of my mind about vibe coding going out of the lexicon: I think that certainly in my three-year, we are going to be using LLMs to be more rigorous about the way we do software. Oh, yeah. That's a one-year. Yeah, yeah, yeah. And I think that's going to be a big blip in general, where it's like, no, no, no, this is not coming to replace your job. This is coming to help you do your job better. Right. The thing today with LLMs: automated tests, no longer optional. Continuous integration, no longer optional. Good documentation that's actually up to date with the code, no longer optional. Those things, in the past, we've been able to excuse: oh, we don't have a good test suite yet because we didn't have time. That doesn't work anymore. You've got time now. Run Claude Code overnight and you'll wake up to a test suite, and it'll be a bit shit, but it's better than zero. Yeah, right. But yeah, it is just amazing, this new world we live in. I've been wondering lately if one thing that has a really good test suite is the Rust compiler. I've been working on a little programming language for the last two weeks, and I've gotten way farther than I ever expected to, partially because I went spec-first. And that's how this sort of dovetails into that. But I've been thinking: should it have just been a Rust compiler instead of my own little language? Because there are so many tests for the Rust compiler.
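The test-suite-driven loop described above has a very simple shape: run the suite, feed the failures back to the agent, repeat until green. A minimal sketch, where `run_suite`, `ask_model`, and the fake agent are all stand-ins for whatever suite runner and agent API you actually use:

```python
def agent_loop(run_suite, ask_model, max_rounds=50):
    """Drive a coding agent until the conformance suite passes.

    run_suite() returns (passed: bool, failures: str);
    ask_model(failures) is a stand-in for an agent call that is
    expected to patch the code under test, then we re-run the suite.
    Returns the round number on success, or None if we gave up.
    """
    for round_no in range(1, max_rounds + 1):
        passed, failures = run_suite()
        if passed:
            return round_no
        ask_model(failures)  # feed the failures back; the agent patches the code
    return None

# Toy demonstration: the "code under test" is a dict entry that the fake
# agent patches, standing in for an agent editing source files.
code = {"double": lambda x: x}  # deliberately wrong implementation
cases = [(2, 4), (3, 6)]        # the "conformance suite"

def run_suite():
    fails = [f"double({i}) != {o}" for i, o in cases if code["double"](i) != o]
    return (not fails, "\n".join(fails))

def fake_agent(failures):
    code["double"] = lambda x: 2 * x  # the "patch"

rounds = agent_loop(run_suite, fake_agent)
print(rounds)  # one failing round, then the suite passes on round 2
```

The point Simon makes holds here in miniature: the loop terminates on its own only because the suite gives the agent an unambiguous pass/fail signal, which is exactly what a device driver lacks.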
They've done a really great job with that. And I'm really curious if that's something similar to: I'm going to build this HTML5 thing, I'm going to build a JavaScript implementation. Like, is someone going to remake rustc? So here's a fun one. I think it's now easier than ever to introduce a new protocol into the world if you ship a conformance suite. Like, release a conformance suite, and boom, overnight you'll have libraries in half a dozen languages, because the conformance suite is the majority of the work. Yeah, interesting. And then when you do that, you also make it much more readily adoptable by other LLMs. It overcomes the problem that it's not in the training data. People are kind of nervous that you could never launch a new programming language now because it's not in the training data, but the context lengths are big enough now that if you can get it into a test suite and fit the examples of how to use it in 10,000 tokens, it doesn't matter that it's not in the training data. Yeah. Ian, we've got you up here. I don't know if you have any one-years or three-years, but you've got such a great track record that we look to you as our Nostradamus. Maybe you just strongly agree with me that vibe coding is going out of the lexicon. I'll take that laugh. Adam, that laughter is noted. That's derisive laughter. I feel like the only way that vibe coding leaves the lexicon is if the older generation makes the term uncool, so the younger generation comes up with a new term that is cooler than vibe coding. What he's saying is you have a big lever. I've done this before. I know how to... It's like, come on, kids, isn't that hella cringe? It's like, dad, dad, dad, please stop.
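The ship-a-conformance-suite idea above works because the suite is pure data: each case pairs an input with the exact expected output, so an implementation in any language (human- or agent-written) can replay it. A toy sketch, with a made-up length-prefixed wire format standing in for a real protocol:

```python
import json

# A hypothetical protocol spec shipped as data: each case gives an input
# and the exact expected encoding, so any implementation can check itself.
CONFORMANCE_SUITE = json.loads("""
[
  {"name": "empty",  "input": [],          "encoded": "0:"},
  {"name": "single", "input": ["hi"],      "encoded": "1:2:hi"},
  {"name": "pair",   "input": ["a", "bc"], "encoded": "2:1:a2:bc"}
]
""")

def encode(items):
    """Toy length-prefixed encoding, written only to satisfy the suite."""
    return f"{len(items)}:" + "".join(f"{len(s)}:{s}" for s in items)

def run_suite(encoder):
    """Return the names of the conformance cases the encoder fails."""
    return [case["name"] for case in CONFORMANCE_SUITE
            if encoder(case["input"]) != case["encoded"]]

print(run_suite(encode))  # [] means the implementation conforms
```

This is the same pattern the HTML5 and CSS suites use at much larger scale, and it is also what makes the spec legible to a coding agent: the failing-case names are the feedback loop.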
So, yes, just watch me vibe this up. I'm vibing right now. That's right. I'm just like you guys, I'm just vibing this up. Okay, we need another term. We need another term for this guy. Don't kill my vibe. That's right. So I do have a few predictions. On the one-year, I have demand outstripping supply for Waymo rides from San Francisco airport. And the way that I'll measure that will be wait times greater than 10 minutes. Yeah. Interesting. That's a great get. That's a great prediction, because Simon, you said this a couple of years ago: that the absolute cheapest tourist attraction in San Francisco is a Waymo. Oh, yeah. So, like, 10 bucks, you get to go in a self-driving car. It's the best. Right. It's like, why wouldn't I wait 10 minutes for a Waymo? I'm going to wait 10 minutes for the Pirates of the Caribbean. Why would I not wait? Interestingly, I don't think it's worn off. For me, it hasn't worn off. I've been riding Waymo for a year and a half. I still get that little frisson of glee when I get in a Waymo and it sets off on its own. Yeah, well, and I actually saw... I was in the apparently pretty tight cordon in the Mission where the Zoox are riding around. And yeah, and so I... You've got to ask them to, yeah. Yeah, and I was trying to get on the Zoox wait list. That's what it is. It's enticing you. I want to actually get in that. So, Ian, great prediction. Is that a one-year prediction, Ian? Or what's the... Yeah, that's a one-year prediction, because they should be launching rides from SFO for the general public this year. I have a second one-year prediction. So Friend, as in friend.com: I think they will have under 10,000 activated devices at the end of the year. Well under 10,000, but that's probably a conservative prediction. Where activated means someone has bought the thing and has actually sent at least one message to it. What is friend.com? Oh, my. Oh, wow. Okay. Yeah. What? Go on.
Brian has not been to New York City this year. Yeah. Is that right? Oh, before you explain it to me: Adam, I noticed you've been a little bit quiet. I think Adam also does not know what friend.com is, and he is relieved that I'm hooking myself. I've been friend.coming forever. I love friend.com. And I use it the way that one conventionally uses it. Just normal friend.com. Just like all the ways you other folks use it. Anyway, go on. I'll let them explain how we all use it together. Tell Father Time here how you use it. Tell Fuddy McDuddy-Duddy how all the rest of us use this. Yeah, I was kidding. Well, this is great. We have a yes, yes, no, no on this one. We do have a yes, yes, no, no. Yeah. So tell me about friend.com. Yeah, so Friend had a large subway ad presence this year in New York City, but also in Chicago, and I think they did a campaign in LA. The New York City ad campaign was not well received. Many of the advertisements were defaced by the New York City public. To the degree that there was a picture I saw of someone who went as the friend.com advertisement for Halloween. They printed up a sweater of the friend.com advertisement and handed out Sharpies so people could deface their Halloween costume, similar to the ads in the subway. Hey, you know what? I've got to hand it to you, New York. This is a very Bay Area thing you all are doing out there. You know, that's great. That is really terrific. Okay, so what is it? It is an AI companion. It is a $129 pendant that has a microphone in it that connects to your phone, and it uses that microphone, which could have just been the microphone in your phone but isn't for some reason, to send messages to an AI companion, which can respond to you; I think it talks through the phone to you. So it is kind of AI chatbot psychosis as a service, or something. Right. Jewelry.
This is in the vein of the Rabbit R1 or the Humane Pin. This is yet another AI wearable. That sounds like it's... you say destined for... I'm really sorry that I didn't get a chance to enjoy this whole ride. But thank you, Ian, for... So you say less than 10,000 devices. That's a three-year prediction, okay? That's a one-year prediction. But yeah, I mean, they're not going to get to 10... The three-year would be that I'm pretty sure this company is going to flame out. But the one-year is that this ad campaign does not really move the needle for them as a company. Oh, my God, that ad just... And that's the kind of thing where it's like... I know, because I'm basically a rule abider. And when I am tempted to deface things... It's like when I'm tempted to run over the security bots, those little ones that Samsung had that would run around and beep at you. I'm like, you know, the fact that I want to throw you into the ditch means that... I mean, this is bad news for you. Well, Brian, this is why I question your claim that this would never happen in the Bay Area. Bay Area people are rule followers to a much greater degree. This is a New York phenomenon. Oh, yeah. Yeah. No, no. I love the rebellion here. And then, Ian, do you have some three-years? Yeah. So for the three-year, I was thinking about the Windows 10 end of life and the claims of the year of the Linux desktop, and my three-year prediction is kind of an anti on that, where the prediction is Windows is still above 90% on the Steam hardware survey as of December 2028. Okay. And that's a good one. Or a grim one, I'm not sure. Are you counting that as utopian or dystopian? I think it's... I think that it's... Well, here's the thing.
I think that Linux has gone from less than 1% to over 3% on the Steam hardware survey in the previous six years, driven in large part by Steam first-party hardware, so the Steam Deck in particular, but also just Linux usage in general has gone up. I think Linux usage is going to go up in the next three years, but I still think that Windows is going to remain pretty dominant within that hardware survey. So that means they may go from 95% to, like, 92% or something, and Linux is going to grow to about 5%. But I suspect that the people who think everyone is going to go out and replace their Windows 10 devices with a Linux machine, or install Linux on their existing device to avoid buying a new device, are a little optimistic about how much work people want to put into their computing. I mean, can you imagine going back in a time machine and being like, oh, there's a year of the Linux desktop? Pal, we're going to have computers writing software in production before we have... Sorry, we are... Although I have tried to use ChatGPT and LLMs more generally on Linux audio problems. What's interesting is that it's actually not that helpful. They tell you the things that, you know... Linux audio is still undefeated, is what I'd like to say. Part of the real struggle here is kernel-level anti-cheat, which is basically necessary for some genres of game and will just never happen with Linux. And so, I don't know, some of this is about the relative market size of those markets versus other ones. But there's some gate... like, I will never not use Windows, because all the games I want to play effectively require kernel-level anti-cheat to run. And so they're just not ever going to work on Linux. Adam, you know this podcast has really, really arrived, because my 13-year-old daughter is texting me predictions that she has during the episode. Wow. Wow.
That changes our whole demographic in so many ways. This apple didn't fall far from the tree. She thinks there's going to be a major scandal involving Apple in the next three years. So don't ask any follow-up questions. She also said that she thought that the OpenAI guy was going to go to jail, she told me. And I'm like, Sam Altman? She's like, I don't know who that is. I'm like, that's the OpenAI guy. That's who you think is going to go. Sure. Sam Altman, if you're listening to this, please send us a cease and desist, because we have that as a goal for the show. Okay, let's go on to six-years. Are we ready for some six-years here? Yeah. Simon, what do you got for us? I've just got the one. I think the job of being paid money to type code into a computer will go the same way as punching punch cards. Okay. In six years' time, I do not think anyone will be paid to just do the thing where you type the code. Just type the code. Yes. I think software engineering will still be an enormous career. I just think the software engineers won't be spending multiple hours of their day in a text editor typing out syntax. It will look like punching cards. I think so, yeah. Yeah, interesting. In six years. But software engineering still very much exists. I believe so. I hope so. I very much hope so. Because I think the challenge of being a software engineer is not remembering what for loops look like. It is understanding what computers can do and how to turn fuzzy human requirements into actual working software. And that's what we're for. And I think we'll still be doing that, just a lot more of it at a lot more ambitious scale. And then, okay, does the software engineer, though, deal with code? I mean, the code is being written. I think they probably look at it occasionally. Okay, only occasionally. A little bit. Who debugs it? I hate to say it, but the agents debug it themselves.
Okay, who debugs your device driver that either works or doesn't? Like, working on this programming language, I'm doing my own codegen, and Claude is happy to pull out GDB and just debug the programs that it generates, and why the binary is wrong, and then backfill that into why the compiler is wrong. Like, it's better than I am, frankly. This is maybe more about me than anything else, but it's a thing that it can do now. I mean, this is a really interesting thing I've been seeing just in the past three months around coding agents: four months ago, I was absolutely on team you-cannot-commit-a-line-of-code that you've not read, reviewed, and understood that these things have written for you. That's just irresponsible. I'm edging away from that a little bit, because it turns out the art of using this effectively is to get them to prove to you that the thing they've written has worked. The same way as, when you're working in a company, you don't review every line of code that another team has written that your team depends on, but you do talk to that team, and you make sure that they are making a convincing case to you that the code works well, and they've tested it, and they've covered the bases, and so forth. It's a similar kind of thing. And it's so uncomfortable. It is beginning to give me the early onset of what they call Deep Blue. Yes. But you cheered me up at the end there, that there's still a role for software engineers. Adam, do you have a six-year? You have a couple. Dovetailing on your daughter's prediction, I predict that the cell phone business is drying up because people are keeping their devices longer. So Apple has several new attempts at what the next flagship thing is going to be. Oh, man, that's a good prediction. That's interesting. I have almost the opposite prediction written down here.
I had: phones remain the most popular form factor for personal computers in terms of units sold in the trailing 12 months. But I do think this longevity thing is a real, real, real issue. I mean, you've already begun to see this, where people are like, why am I getting the latest iPhone again? The camera's already awesome. And actually, I care more about battery life. I care about, is it waterproof? I mean, I care about other things that... So, Adam, how does this... I guess, does this happen after the major scandal in the next three years? Terrific. It must be on the heels of that scandal, yes. Or maybe this is somehow wrapped up in the scandal. Maybe the scandal is that they're scandalously entering a new business, or what have you. No, I think that it's got Apple... but Apple's got a ton of capital, so they could go... they could do a bunch more Apple Vision Pros. Yeah. Well, they, yeah. So, Ian, because you say this is on devices sold: do you think that the phones are going to still find ways to differentiate, or... I just think that I kind of have the opposite view, in that I think the phone sales may not go up, but they're still just going to dominate in terms of units sold, and there's no other form factor that has emerged that is more popular as a personal computing device. Yeah, I don't think those are incompatible, Ian. With phones going down, it still could be the most popular form factor, and folks could be desperately, Apple in particular, desperately trying to figure out what the next thing is going to be. Okay, could I tag a prediction onto that? Which is that if phones are not the most popular form factor, I think it's going to be the Neuralink device of some sort. Oh, here we go. Neuralink in six years. Is this how... No, I don't think it's going to happen. Okay. But if phones... Okay, if phones, if not phones. If not phones, it has to be that.
Because all of the other form factors, the little bracelets and things you talk to, that's all garbage. Nobody wants to talk out loud to their computer. Right, right. But if you can think to your computer in public, that's the thing that could knock the phone off its pedestal. And it will be the leadership of the Pope, of the papacy, leading the way with the neural implant. Okay, interesting. There's a cursed prediction that's a mixture of all of these, which is, of course: Apple acquires friend.com. And it's less than 10,000 devices. I have a second device prediction for six years, which is: I predict that more Macs are sold in the trailing 12 months than any smart glasses or AI companion devices. This at the six-year mark. So when the six years are up, you've got more Macs than anything else. Yeah, so when the six years are up, we look back at the previous 12 months. It's like, hey, it's all laptops. It's laptops and phones. It's the same. Yeah, I'm saying the laptops... well, specifically Macs, so it's not actually laptops, it's the Mac line, because I think that's the only thing that you can get a number of units on, roughly. But I think that more of those are going to get sold than any smart glasses or AI companion devices. And I'm saying Macs specifically. Like, I think that, you know, laptops generally is definitely bigger than Macs. I'm saying that these smart glasses and AI companion devices are just not a real volume seller at all. Yeah, I totally agree. To any real degree. Yeah, I totally agree with that. So I am going to say that the DSM adds LLMs as a contributing factor to psychosis. The DSM treats LLMs the way it treats, kind of, cocaine, where you can have... which was prescribed a lot in the early days of the profession and then looked back on as a mistake. Well, no, because I think we are... You mentioned the lobotomies earlier.
I think that we are going to have an increasing number of incidents of LLMs resulting in psychotic behavior. Okay. Has the DSM got anything about social media in it right now? So right now they do have something on, like, internet gaming, for example. But I think this is going to be faster than internet gaming, because I think that gaming is looking more at social isolation and some kind of modicum of dependency, versus, no, the LLM got you to do something that you would not have otherwise done. You had this delusion that your mother was involved in a global conspiracy and you burned down your house. You're betting against the AI labs being able to tamp this stuff down, which I think is a bad bet. I think it's more that I'm just betting on crazy, in that I think that there's no amount of safety that you can put in place that allows these things to be used and not... I don't know that they will be liable. I think it's going to be more for diagnosticians to be aware of: hey, if you're talking to a patient, do they have this kind of idea because of the LLM? Have they been having conversations with their LLM about this? I mean, it feels like we need this today. Oh, no, I think we do. The reason I was saying earlier at the top that I was struggling with six-year predictions: the DSM moves slowly. So that's why this is a six-year prediction and not a one-year prediction. And this is well beyond Deep Blue at this point. This is well beyond Deep Blue. That's exactly... well, no, because this is not like a feeling of ennui. I got it. It's delusion. It's a psychosis thing. And again, we have already seen this, and I think we will, and it's an accelerant. It's like substance abuse. You've got people that can use substances without actually developing this kind of psychosis, and then others that develop a real psychosis around it.
And I think that we'll see the DSM become aware of that. I also think, actually in three years, but certainly in six, you're going to have people trying to use "the LLM made me do it" as a legal defense. I'm not blaming the actual frontier model; it's the LLM that ginned me up and talked me into committing this illegal act, whatever it might be.

Are stock buyers also going to use it? Are they going to be like, the LLM told me to buy the stock, I didn't use any insider information to trade on?

Absolutely. Absolutely. This is "the kitty did it," which is what we went with when you've got the toddlers: everyone blaming the LLMs. Absolutely. The LLM told me to buy the stock.

Oh, actually, I forgot one of my three-years. I do think ads are going to enter LLMs, and I think it's going to be an issue.

Like product placement? Be like, what would go great with this recipe is a Coca-Cola?

I think product placement, and where you are putting your thumb on the scale either of the output or by getting more of the input. Because think about the view that these chatbots have on the kinds of questions we're asking. And boy, if you were in marketing or you were developing a product, wouldn't you love to know what people are searching for? It feels like something you would pay for, and something that, you know, I think these guys will sell to you, post the AI bust that I'm predicting roughly in three years. So all my predictions try to hang together.

ChatGPT knows when you're pregnant because you talk to it.

Yes, absolutely. Absolutely. The old adage, I think it was Target, right, that famously knew.

Famously, apparently that wasn't real.
The thing where Target guessed someone was pregnant from their purchasing habits, apparently that doesn't hold up.

That's a relief, because it kind of didn't pass the smell test at the time. So, are you saying that the ChatGPT equivalents are going to integrate ads as a first party, or are you taking a black-hat-SEO view: people are going to work out how to get their data into the training data, such that when someone asks what the best laundry detergent is, the model will spit back, "Oh, it's definitely Tide, and you should not use any other brand"?

I was not predicting the latter, but I think the latter is a great prediction, so I strongly concur with it. But I think there are going to be other kinds of commercial vectors here, some of which ultimately are going to be ads at some level. It's going to be getting you to buy product.

Adam, did you have other six-years?

Yes. I think you're going to like this one too, even though it sounds insane as I read it. I think Tesla is going to be out of the consumer car business. I think they're going to be selling batteries, I think they're going to be selling fleets, but I think they are not going to be selling to individuals. Their numbers are down year over year for the last two or three years, and I think that's going to continue.

Do they sell whatever the plural of Optimus is? Is that Optimi? What is the plural of Optimus? The Tesla bots. Does that ever come to fruition? Is that what they sell?

Oh, sure. Yes. It's bots. Yes, it's theirfriend.com.

Okay. Yeah. I love this prediction, obviously. But batteries is already a big part of their business. And arguably, the cars are batteries. And fleets.

And fleets. Okay. So they are out of the consumer car business.

Yep.

I do love that one.
I'm going to add that NVIDIA's peak valuation, we will see in six years, was in 2025. So I think we are past peak NVIDIA. This is not stock advice. This is not investment advice, although this one is definitely...

If you think it is investment advice and you act on it, if you could please send us a cease and desist, we'd appreciate it.

Exactly. If you're listening to this, please put all your money into shorting NVIDIA.

That's right. And this is not a slight on NVIDIA. I think the valuation is simply too high, and there's too much competition, too many things out there. I mean, we talked about Gemini last year, and Gemini was not trained on NVIDIA GPUs. I just think there's too much out there, too many headwinds ultimately for that valuation. Absolutely a going concern, and a well-executing business.

This dovetails into one of my predictions too, and maybe justifies it: I say in six years Jensen hands over the reins at NVIDIA to a successor CEO, maybe on the back of the dwindling stock.

And is that CEO Pat Gelsinger?

No, I think he's focused on his faith-based startup. His faith-based LLM startup.

Yeah. I mean, he'll be, what, 68 or 69? Jensen?

Yeah, that's right. An almost-70-year-old man who has infinite wealth decides to retire does not seem...

Sure. Okay. Bet against it. That's fine. But look at Morris Chang, who at age I-don't-even-know is still going strong. Or Larry Ellison.

Yeah. I was going to go Pierre Le Monde, but yeah, Larry Ellison, fine. Sully this podcast. All right. Steve, do you have any six-years?

My six-year is boring, but it's funny because it shouldn't be boring.
It is, which is: AI will have not caused the total collapse of our economic and governmental systems.

That's a very optimistic prediction. That's great.

Yeah, I'm choosing to be optimistic here. I mean, there are some ways in which that could be pessimism and not optimism, but I'm going to say it.

I mean, you didn't predict that economic collapse wouldn't happen; you specifically said that it's not going to be caused by AI.

Yes, correct. I think we're going to figure it out, and I think that a lot of the anxiety and worry right now is just that: anxiety and worry. Humanity is resilient. Change is going to happen, but we'll be okay. It's going to be fine.

And this is the affirmation tape that you listen to when you're beginning to suffer from the Deep Blue. This is you, Steve Klabnik, reading this. You put your headset on as you're going to sleep.

I had a very optimistic 2025, and so I think I'm going to try to continue that into the future. We'll see.

That is great. Adam, do you have any other six-years, or are we going to end on the optimistic note?

Let's end on the optimistic note.

Translation: I do have another six-year, but it's way too grim.

Well, it's good. I think a common theme from this year, I would say, is LLMs really transitioning into a useful tool in the hands of practitioners. I think that, and the demise of friend.com, are the two big themes.

And the rise of the capital.

Absolutely. I'm going to go check out the parrot. If I learn that the parrots are vibe coding, I'm going to be very upset, because that's going to run contrary to my one-year prediction.

All right. Well, this has been great. Thank you all for joining us. If you do have predictions... and actually, Mike Cafarella joined us last year. He could not join us this year, Adam, and sent me some of his predictions.
So I'm going to drop those into the chat, so we've got those on the record. If you do have any predictions, get them on the record; we'll have PRs open as well, so you can get PRs in there. But thank you all for your predictions. We've said before that predictions tell us much more about the present, we think, than about the future. But I don't know, maybe this year is the exception and we're going to learn a lot more about the future.

I do think Deep Blue has got... It's very good, Adam. I mean, it's really...

If people have predictions, whether you're listening live right now or on YouTube or on the podcast, go to the show notes on GitHub and drop your predictions in; it'll give us an opportunity to review them in one, three, and six years. So feel free to submit a PR.

Awesome. Thanks, everybody. And here's to a great and hopeful 2026. Let's go check out the parrots.