This Day in AI Podcast

2026 Existential Crisis, Claude Code Hype & Is SaaS Dead? EP99.30-WIZARDS

69 min
Jan 19, 2026
Summary

The hosts discuss the current state of AI development, examining the hype around Claude Code and agentic workflows versus the reality of practical implementation. They explore whether SaaS is truly dead, the economics of AI deployment in enterprises, and the evolution toward AI-first workspaces that could replace traditional software stacks.

Insights
  • Current AI models may be at their worst point in a long time, with each frontier model having significant weaknesses despite improved packaging and accessibility
  • The real value of AI tools comes from collaborative workflows rather than fully autonomous agents, as humans need to remain in the loop for quality control and context
  • Enterprise AI adoption will be limited by token costs until cheaper models can deliver comparable performance, making cost management a critical factor for mass deployment
  • AI workspaces are evolving into 'everything apps' that could eventually replace traditional SaaS tools by integrating email, calendar, files, and business functions in one AI-first interface
  • The threat to SaaS companies isn't immediate replacement but gradual displacement through better AI-native alternatives that offer superior integration and lower costs
Trends
  • Shift from chat-based AI to collaborative AI workspaces with integrated business tools
  • Growing demand for model flexibility and cost control in enterprise AI deployments
  • Evolution of agentic workflows from hype to practical delegation of specific tasks
  • Emergence of paid, proprietary MCPs and AI-native data services
  • Consolidation of business functions into AI-first platforms rather than separate SaaS tools
  • Increased focus on AI training and workflow optimization in enterprises
  • Development of SDK-based AI platform ecosystems similar to app stores
  • Growing importance of context building and tool integration over raw model capabilities
Quotes
"My personal opinion is the models are the worst they've been in a long time. Like, I really don't have, like a go to model at the moment where I'm like, this is just blowing my mind with how good it is."
Host
"There's no way you can have people sitting around running these agentic loops all day and do it for $200 a month. It's, it's ridiculous."
Host
"I think employers should be asking in job interviews, show me how you work with AI. Like I really do think that should be a key part of every job interview now."
Host
"The only way you can vibe is if you can do it, as John Carmack would say, fearlessly. You need to be able to do it without worrying."
Host
"Actually, I'd rather get back to writing my book."
Geoffrey Hinton
Full Transcript
2 Speakers
Speaker A

So Chris, this year, you like the, the year thing, first episode back, obviously.

0:02

Speaker B

That's the kind of quality content.

0:09

Speaker A

People are definitely tuning back in for that. Very excited to be back for the year ahead. But I think what's happening, if you've been following along on X or various places, there's sort of two camps right now. There's the hype boys being like, agents are finally here, Claude Code can do my washing. Then there's the other side, which is what I'm calling the 2026 existential crisis, where software developers are like, oh, it's over, because they're listening to all the tweet boys, and people in all white-collar jobs are fearful again for some reason. And then other people are sort of saying, oh, SaaS is completely dead, pack it up boys, we're going home. So I thought for the first episode of the year, maybe we could unpack the vibes, unpack the feelings that people have had. I think in the northern hemisphere at least they've been locked up so much that maybe these existential crises have come from being, you know, low. What's the vitamin you get from the sun again? The D. The vitamin D. They've got low vitamin D. And maybe it's too much lack of D, that's starting to sound bad, that has caused the existential crisis and people thinking that Claude Code can do anything and they can vibe code the world. So you've also seen this sort of existential crisis play out on X by doom scrolling yourself. How are you feeling?

0:10

Speaker B

I've been through it too, because I've been away for a couple of weeks, and so when your only interaction with the AI world is through watching what other people are doing, and a guy's like, I've got eight agents recreating the entire business ecosystem for my local area and they're all off working all day and I've recreated everything, it just leads to this sense of apathy. It's like, well, what's the point? You've just got regular Joes out there, you know, becoming $10-million-a-month businesses in their spare time. So why should I even try? So I think for us it always comes back to, okay, I'm home now, I'll try it for myself and see if it can really do that or not.

1:38

Speaker A

Yeah, and I think that's the thing. I've been following along with people, you know, the different setups, and people being like, I've made 2,000 skills for this agent, and all of that kind of commentary. And I think the reality is when you go and try these things in your practical day to day, that's when you sort of start to calm down. You're like, okay, this is where things are at. Like, I'm not behind, or you might still feel behind, but you at least get a sense of the reality of where things really are at. And I think for a lot of people the excitement over Claude Code is that it can actually do things right on your local file system and help you create things. And so that's really appealing to people because they can, you know, build a website, build a game, maybe solve a bug, and do these things that they otherwise mightn't have necessarily seen AI do before. Like when they're interacting with it and they've got to do the work, it's the first time they have that sense of the AI is doing the work. And I think that propels it forward a bit. But the funny thing is, every time I sit down with those various agents or applications, any significantly hard challenge in a big code base, I find them incredibly painful to deal with. And then on the other side, when I vibe code a project, I don't actually feel, to be quite transparent, that the models are that different than when we were doing create with code almost a year ago now. It doesn't feel like there's that much difference apart from it having access to the file system. And long-term listeners of the show would recall back in the day I put GPT-4o, I think, in a loop to try and create a Klarna CRM, because Klarna said they were replacing all their staff and they were building all their own internal software. And I put it in a loop and it actually got quite a long way, I would say almost as close to some of the initial scaffolding of vibe code projects today.
So I feel like in a way the models have advanced considerably. But it's also just that now the industry itself is figuring out how to package these things up in such a way that people can see the benefit from them.

2:20

Speaker B

Yeah, I agree. I think it's a little bit like, you know, when those people go to Africa and they're like, we built 1,000 homes in one week, and they've just made all these houses. But you realize that they've got no structural integrity. And it's almost like one of those cartoon villages in Looney Tunes where it's just held up by a beam or something like that. Where, yeah, I can make something really impressive really, really quickly with the models, but when you actually get to, okay, how do I deploy it now? How do I scale it? How can I be sure that it's secure? How can I continue to add features without it turning into some garbled mess on the back end? That's where it gets really challenging. And I think that when I think about real business challenges that can actually, you know, move the needle for existing companies, most of the problems that they have are entrenched or ongoing things in massive existing systems. They're not, okay, we need a brand new landing page, or we need a brand new piece of SaaS software, or something like that. I think the thing is, it's very easy to make the technology look impressive, but when it gets to the real stuff, it actually becomes more of a challenge to use it ongoing. And I'm not trying to diminish it at all. I think there is a lot of merit to the current batch of tools and the current approach to how the technology is being used. I just don't think it's as simplistic as it's being made out to be.

4:39

Speaker A

Yeah, and I think this is the challenge with social media, because there's so many people out there, especially around AI, trying to feed that FOMO, or those feelings that you have, putting out content, saying all these things, and when you dig into it, the examples of what they've actually done, you never actually get any meat. It's like, well, what did you actually ship? Or if you click through to the bio, it's like they're trying to sell something, like a course or something ridiculous. So a lot of this momentum shift feels like the companies, to stay in the zeitgeist, have got these bot armies or something. It feels very unnatural to me and very unsettling. And I think that when you are in that sort of doom-scroll mode, away from it for a while, and I think this is maybe what people who aren't working with AI every day feel to some extent, you feel like it's just this rapid pace of change that you simply cannot keep up with. But then when I got back into it this year and started digging into stuff, you're like, okay, nothing's really changed over the break. Like nothing's really happened.

5:57

Speaker B

Yeah, well, my personal opinion is the models are the worst they've been in a long time. Like, I really don't have, like, a go-to model at the moment where I'm like, this is just blowing my mind with how good it is. I think all the frontier ones have their own weaknesses at the moment. And so I think what it is is more people being able to discover what other people had already been doing with the models, just due to better software accessibility. We've talked for a long time around the importance of context building in getting tasks done. And I actually think what, say, the Claude desktop app and the other things that everybody's talking about have done is bring that ability to build a really high quality context, and possibly really good actions, you know, like writing files, reading files, all those sorts of things, to more people. So everyone is discovering things that we as this podcast group sort of already knew about, but more people have now discovered it and then seen the possibilities in their own areas of expertise.

7:03

Speaker A

I think back to the early code interpreter in ChatGPT, like the really original ChatGPT when they had the code interpreter that could execute code in a sandbox. And it's always been capable of doing things like creating Word documents or slides. And look, they were ugly and terrible and it wasn't that great. But it's still the same underlying principles that are running today. Like it's the exact same thing. They've just got a bit of lipstick on it now with skills, and getting it just feeling better. It's like the vibes of it are better because they've tuned the models better to cope with, you know, the taste profile of people. Like it's just, you know, better example training for the models. And so I think now, like you say, it's just something that's slowly becoming more accessible to more people, and then the impacts we'll see from that, it's still very unclear what they will be. But there has been a trend on my mind. During the break I found it quite difficult to keep track of things. You know, I've got a bunch of email accounts, a bunch of different calendars, and I find it quite hard now where I spend most of my time in Sim Theory, just working in my tabs all day, every day. Like it's my primary work console, right? And I started to think, well, you know, why? Like I've got to go into my email, look at an email, I can use the MCP to bring it in. But after a while, the MCP paradigm, to be honest, and I think many people that have used them extensively already know this, it's tiring. You're like, check my email and find this email and do this. You just don't really want to type anymore. I think there's that disconnect at some point.
And so the theory for me, at least like six to twelve months ago, when 2025 was the year of agents, was this thing will just read the emails and surface stuff up to me and say like, hey, you know, you should reply to this or you should do this. But what I've found even better is I connected my email inboxes and my calendars into Sim Theory in the UI. So in the notification tray I can tab over now, see all my emails. It also is able to store the file attachments if I wish as well. So then I can execute emails like they're a task in a tab, and I can say, okay, take this email and these attachments and do whatever tasks I need to now do for me in a tab in the background. And so that got me thinking that a lot of these day-to-day tasks, like email, files, calendars, time management, task management, all of those things start to feel really natural in your AI workspace. Right? These sort of office-like functions are the ones, in my opinion, that once they're baked in, these AI apps start to become the everything app in a lot of ways, like the start and end point of the universe for workers. And to be quite honest, I think that's probably the likely next step in the next 12 months we'll see. As opposed to people running a ton of agentic loops. For research and various tasks, sure. But I just don't know, day to day, having started to work with this stuff, if that's actually the way people will work versus the way a bunch of developers think they'll work.

8:02

Speaker B

Yeah, I think that's right. One of the appeals of agentic work is the idea of the sub-agents. So when it needs to do specific tasks, it has its own set of prompts to do those tasks. So right now if you're working in a normal AI chat session, you get to the next step and you're like, okay, here's your set of instructions for this step. This is what I need you to do with the data now, or the email now, or the calendar now, or whatever it is. Then it does that step and then you say, okay, now here's what I need you to do. The agentic workflow essentially has pre-built prompts for each of those steps, whether they're dynamically created by the AI that does the planning, or by you in the form of skills. So one way or another, it's getting these dedicated prompts and dedicated context for each step of the process. Therefore, agentic mode is able to go on for much longer than a regular chat session where you're just giving it that information at each step. So to me, that's the appeal. When you've got well-known processes, or you want to rely on the agent's intelligence and a bunch of skills to go off and get a large piece of work done, that's really cool. But I agree with you, there still is a huge area where I actually still do want to work interactively, because I've got things I need my mind involved in where I keep the process going. And I think the weakness we've seen is with the MCP paradigm, in the sense that MCPs were really just APIs, like regular product APIs with just tool calls added on top of them, and they don't work that efficiently. And I think that coming up with better interfaces for that stuff, so the tools are really more dedicated to the kind of AI work you're doing, is going to be a much bigger boost than just relying on the AI to iterate all day. Or, oh, my AI worked for 14 hours and made a PowerPoint presentation. It's like, okay, cool. That's very good that it was able to run for that long, but did it really need to?
That seems like an unnecessary amount of time to do something like that.
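The per-step prompt pattern being described can be sketched in a few lines of Python. Everything here, the step prompts, the `call_model` stub, is illustrative and not any real agent framework; it just shows the structural difference between one growing chat history and dedicated prompts with a compact hand-off per step:

```python
# Sketch of an agentic loop with a dedicated, pre-built prompt per step,
# versus one ever-growing chat session. call_model is a stand-in for a
# real LLM API call.

def call_model(step_prompt: str, context: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"[done] {step_prompt}"

# Each step carries its own prompt (a "skill") and only the compact
# context handed forward from the previous step.
STEPS = [
    "Read the email and classify its intent.",
    "Pull out dates, attachments, and action items.",
    "Draft a reply covering each action item.",
]

def run_agentic_task(initial_context: str) -> list[str]:
    context = initial_context
    outputs = []
    for step_prompt in STEPS:
        result = call_model(step_prompt, context)
        outputs.append(result)
        context = result  # compact hand-off, not the full transcript
    return outputs

results = run_agentic_task("raw email body...")
```

Because each step starts from a purpose-built prompt rather than the entire conversation so far, the loop can run for many more steps before context quality degrades, which is the "relentlessness" the hosts go on to discuss.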

11:46

Speaker A

Yeah. I also think there's this challenge that there's almost these two modalities with this kind of thinking. You've got the whole chat mode, which I would consider a co-work or, you know, a collaborative mode, where you're working with it and it's still able to run small loops with tool calls and skills and various things, but you're sort of cherry-picking or helping guide it more and going back and forth. And so you're giving it more agency in the task because you may actually not know what direction you want to go. Like sometimes you don't have all the answers. It's the journey, like you don't know the destination, you're kind of working there. And then I think the other piece is what everyone sort of calls agentic, but I would almost consider that delegation. You're at a point where you're confident enough in what you want, or it's so repetitive and a painful thing that you do, that you do want to delegate that task. You want to say, hey, you just go do that, get back to me when you're done. But then there's that other co-work mode, and I think collaborate is a more normalized way to put it, and I think it escapes what people think of as the chat paradigm. And to be quite honest, now with Sim Link having access to my local file system in Sim Theory, the way I still prefer to work, especially on more complex projects or things that I'm trying to understand myself, is that collaborate modality. The agentic modality I think is really great at certain things, like research, or preparing metrics, or, you know, going off and creating a bunch of charts or a report or a presentation. But the day to day is definitely in that collaborate modality.

13:50

Speaker B

Yeah, I think you said it to me this morning. Really, all agentic mode is, is how relentless do you want it to be? Do you want it to just come up with a plan, keep going on the plan until it's finished, then at the end do a quality assurance check to make sure it got you what you wanted? Like, that's the difference. Whereas the more interactive mode is, okay, do a bit, then stop. Do a bit, then stop. And it's interesting because it's quite analogous to a real-life worker. Like whenever I hired someone new, I always, I read it in a book somewhere, but I would always give someone a task and say, work on this for three hours and then stop, and I'll check in where you're at, just to see that we're on the same page and you're going in the right direction. Because if you give an untrained person a brand new task and then let them go for a week, you might discover, whoa, this is not what I wanted at all. And I feel like that's the sort of interactive mode where it's like, do a bit, and then we'll see if we're on the right track. Whereas once you get confidence in that workflow or that person or whatever it is, you can say, okay, now I really want you to just go off and do this task, because I know you know what you're doing. And I feel like when you look at the way agentic works, in terms of building up an agents.md file, or building up its own memory, or building up its own set of skills and files and stuff that it needs to do that task, it's a little bit like what we've seen in the past working in chat mode, where you build up a beautiful context that you love and you're like, this context is so good, I can just reply to this message anytime and it'll help me with this task. I feel like we're going down a similar path in the agentic world, where you get the right mix of skills, the right mix of assistants that you can reference, the right context, the right mix of MCPs, and then it can now do those tasks competently and you can trust it from start to finish.
So it's almost like a more sophisticated way of building up a set of context and tools that is the right environment to get that task done, and then you can give it those tasks from then on and trust it to complete the process. And I really feel like it's that level of maturity of work that is going to be needed in an AI system to get proper agentic going, rather than the, I just asked it to build Tetris and it went off and spent 16 hours and built Tetris. You know, these are not tasks that it's seen before. These are unique business problems, or problems that you face uniquely, that it's not going to be able to do cold. You need to get the right mix of stuff to get there.

15:47

Speaker A

Yeah. And increasingly I think that that's where the agentic loops are good, in a way, where once they've got access to the context, you know, you can send it off and it can go hunt for that context. But then the question is, in the shorter term, maybe longer term, are people really going to do that? Because of the cost of it going off and finding it, when you can just quickly point it in the right direction. There's definitely a balance there, because it's like, how many tokens are you gonna burn on this stuff? I guess what I'm trying to get at is, is your daily active user coming back every day to use an agentic loop, or is this just an occasional thing? Like, we saw this play out with ChatGPT almost a year ago with deep research, where at first people were like, oh, this is amazing. It goes off and it spends an hour or whatever and it produces all this research. But then you've got to synthesize it and read it and understand it and then source-check it. And, you know, I think people were using it for everything and then they just kind of stopped. And the shopping stuff and all this, you know, these are disparate, very purposeful reasons to use the AI, but you still come back ultimately to that collaborating-with-AI mode. Like, that's your daily driver kind of vibe.

18:08

Speaker B

Exactly. And I think we've spoken about it before. I feel like without some level of interactivity on your side, you're not invested in the output. Because if it just does all the work and then you get a document at the end of it, you don't really know anything about it. You don't know what went into it, you don't know what the sources of information were. And to some degree, you're kind of just having faith that it did the right things, which I think is okay if it's an established process and you've done that sort of stuff before. But do you really want to trust it to then produce that and submit it as your work without having any level of human in the loop in that process whatsoever? Like, I think there's a sort of risk there of a bit of an intellectual disconnect where you don't actually have any investment in it at all. So the work feels worthless, or at least not something that you can actually trust. Whereas I feel like when you're working as part of the loop, now you're working towards where you get to, so you have some part of it.

19:32

Speaker A

Yeah, there's so many different categories and ways to look at this. Like, there's definitely the sort of chat collaborate mode where it can call tools and skills and get a lot further than ever before. And a lot of people on the cutting edge have been using this stuff for a while and are just sort of used to this way of working. But I think a lot of people never got there because it's just simply pretty hard to get there. Like, it's not something easy. Then you've got the agentic tasks, where it's like, hey, go research this, hey, organize files on my desktop, whatever that might be. But then I think there's the third layer, and we might see that this year as well, which is the true autonomous, where you're able to design and build an agentic loop that is just your support worker or, you know, your time manager or whatever. And it's just constantly running in a loop, doing productive work that truly does replace some part of a job, like a function a person would otherwise have to do.

20:33

Speaker B

Right. Like coming back to the cost, even with Claude's max plan. What did you say? It's $200 a month for the max plan and it's unlimited, which means they're taking a loss on that. There's no way. I just know from the token consumption, there's no way you can have people sitting around running these agentic loops all day and do it for $200 a month. It's, it's ridiculous. It's obviously a market-share play, like they're trying to pick up the market or parlay it into bigger deals. I don't know what their strategy is, but you can't do that without billions in the bank. Right? Like that's an expensive strategy. Now you think about organizations that have a staff of a thousand people and they want to empower those thousand people with the best that AI currently has to offer. Can you afford $200,000 a month for that? Like, even at this heavily subsidized rate, I would argue no one can afford that, no matter how good it is. Because you know that the majority of those people are not going to be able to work so effectively with AI that they can do the job of five people. And if they can, then you've got to fire 20% of your staff, right? I just don't see how it's sustainable using the frontier models to run agentic stuff all day for everyone who wants to do it.
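The back-of-envelope numbers in that argument work out as follows. The seat price and headcount are the figures quoted in the conversation; nothing here is a real vendor price sheet:

```python
# Rough enterprise cost math: a $200/month "max" seat across 1,000 staff,
# using the figures quoted in the conversation.
seat_price_usd = 200
headcount = 1_000

monthly_cost = seat_price_usd * headcount   # 200,000 USD/month
annual_cost = monthly_cost * 12             # 2,400,000 USD/year

print(f"${monthly_cost:,}/month, ${annual_cost:,}/year")
```

At $2.4M a year before any overage, the rollout only pencils out if the per-employee productivity gain clearly exceeds the per-seat cost, which is the host's point.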

21:31

Speaker A

But yeah, I guess it really comes down to, over time, how many valuable tasks do people do that are replaceable? And like, yeah, as you say, $200,000 a month, if that's what you're looking at, you've got to be getting significant gain. And you see a lot of the Twitterati sort of fanboys tweeting, like, you know, if you have 10 developers, you shouldn't fire nine of them; if one developer can now 10x, you should get 10 to do 100x, and all these ridiculous statements. And look, I am not debating that it isn't a huge productivity lift. I mean, I think it's enormous and the capabilities are incredible. And, you know, I feel like I can take on anything now and do anything. But there's still that element of the human in the loop right now, and the agency I have, and the limited hours I have in a day to be working with this thing to produce work that's meaningful. I guess the real question is, if in the next iterations it can truly work and do tasks 24 hours a day that can replace people, can you get the cost lower than a human salary in tokens, in reality, with some margin for that vendor, so that it is now worth it? I think that's really the question.

22:50

Speaker B

And I think this is the argument for multi-model, right? Because I am looking at using this with, like, Haiku, Gemini Flash, GLM, DeepSeek. Like, using it with models that can be run in a way where you can blast this thing out all day. You can have 10 agents working, no worries, you don't even need to stress about it. And my argument is that because so much of it is around the prompting and the tooling and things like that, we're reaching the level where it is realistic to do this on the lesser models, because the model's own knowledge of things is becoming less important. Like, it doesn't need to know who won the Nobel Prize in 2024 in order to get tasks done.

24:17

Speaker A

Who was it, out of interest?

25:02

Speaker B

AI. But my point is that I think.

25:04

Speaker A

Wasn't it our love rat?

25:08

Speaker B

Hinton.

25:13

Speaker A

Remind me at the end of the episode to tell you a story about Geoffrey Hinton.

25:13

Speaker B

He's a bad guy, isn't he? Is that. Is that libel? Libel's in writing, isn't it? That's just slander.

25:18

Speaker A

I feel like he's got enough reasons to sue us now. I mean, like, we made T-shirts and we've published music about him.

25:23

Speaker B

Truth defense, man. You can't get us if we're telling the truth. So, yeah, he's a love rat, I'll go on the record with that. He said it himself. He's like, why would I stick around? I'm an AI godfather. Anyway, my point is that we need to look at the cheaper models and how we can optimize them to work in the ways we've been talking about this morning. Because it is important that organizations get all of their staff using this stuff. Like, I don't want this conversation to come across as if I'm skeptical about the abilities of the technology. I think it's amazing. I use it every day. I think we're still learning the best way to work with it. But you've got to do it in a way that you can get more of your team using it. And I just would argue that you're not going to get the productivity gains until you learn more. And who can afford to do that? Especially with what the people on Twitter are saying; like, no one can actually be doing this. Someone said they wrote like 95 million lines of code and unit tests for everything and all this stuff. I'm like, how much did you spend on that? It probably cost more than actual developers would have. So I just wonder about that stuff. It's a bit hyperbolic right now what people are saying, and I think we need to be realistic about what's possible and think about ways to do it in a sustainable way.
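One way to act on that point is a simple router that defaults to a cheap model and escalates only when a task is flagged as hard. The model names and per-million-token prices below are made-up placeholders, not real price sheets, and real routers would use something smarter than a boolean flag:

```python
# Illustrative cost-based model router: cheap model by default,
# frontier model only for hard tasks. All names and prices are invented.
MODELS = {
    "cheap":    {"name": "small-model",    "usd_per_mtok": 0.25},
    "frontier": {"name": "frontier-model", "usd_per_mtok": 15.00},
}

def pick_tier(task: dict) -> str:
    # Real routers might use heuristics, a classifier, or a
    # retry-then-escalate policy; here it's just a coarse flag.
    return "frontier" if task.get("hard") else "cheap"

def estimate_cost_usd(task: dict, tokens: int) -> float:
    tier = pick_tier(task)
    return tokens / 1_000_000 * MODELS[tier]["usd_per_mtok"]

# A day of mostly-easy tasks stays cheap even at high token volume.
easy_day = estimate_cost_usd({"hard": False}, 50_000_000)  # 12.50
hard_day = estimate_cost_usd({"hard": True}, 50_000_000)   # 750.00
```

The 60x price gap between the illustrative tiers is the whole argument: if prompting and tooling carry most of the load, routing the bulk of agentic traffic to the cheap tier is what makes all-day loops affordable for a whole team.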

25:29

Speaker A

Yeah, I think in those scenarios too, it's like, bring receipts. What did you actually do? Because there is that cognitive challenge too. Like, even now I find, if I'm working four tasks over four tabs, right, I don't know about you, but I get distracted. I go down one path and then I come back to the other tabs eventually. But then I have to, like, restore my own RAM on those tasks and try and figure out, you know, where I am and then get on with it. And even if those are four agentic things operating, doing different pieces, I've still got to bring it together as the human. Like, no one's going to accept at this point my AI agent emailing a bunch of people, being like, you know, here's the meaningless report that I haven't looked at.

26:55

Speaker B

Yeah, I mean, we look forward to the days of apology emails from companies saying, oh well, AI did that, it wasn't us. But the other thing that I find, similar to what you're saying, is when I work, say, agentically on code, and the AI decides to change 14 different files, right? All key sections of the code in 14 different files. And I'm just like, YOLO, AI is the smartest thing ever, commit. And then bugs come out later and you realize that there's problems and things like that. You're like, okay, now how do we unpick this? I've done 47 commits today, all done by AI. I've changed 4,000 lines of code. Let me now painstakingly go through and work out which one of those lines of code broke my other thing that it didn't take into account. Because the reality is, no matter how good the context window is, no matter how good the code searching is, no matter how good even automated testing is, unless you've got some pristine code base with, like, perfect integration and unit tests for everything, you're going to break stuff. And I just feel like a lot of the examples are around these pristine brand new projects that don't have any issues because the AI has written them from scratch. And it's not taking into account the vast majority of people who are working on existing stuff where you can't just do that.

27:45

Speaker A

This is a really interesting lead-in, because I think there's sort of two emerging trends and they overlap a lot here. The first is, I believe in the AI workspace, and I have for, I mean, you can go back like a hundred episodes for how convinced I am of this. But I do believe slowly these AI applications, like ChatGPT, Claude, Sim Theory, whatever, they're starting to become, in people's lives, the everything app, in the sense that once you get email, calendar, files, all this stuff in there, your context connected to your computer, that becomes the console of the future. That's the place where you interact and work and get work done and possibly collaborate in the future, you know, whatever it is. I do think increasingly that will start to be very competitive with the stacks that people have in businesses, where businesses get to a point where they're like, well, why isn't my CRM on this? Why isn't my email on this? Why don't I just connect to Snowflake? Why do I even need the interface somewhere else? I think not even Snowflake, I mean, just spin up a Redshift or whatever. I think that's going to happen. And the reality is these AI workspaces increasingly have roles and permissions and users and all of this sort of enterprise security and architecture already built out. So once you then start placing applications in that workspace, in a controlled environment you can modify and screw with, but in a controlled way, I think that starts to make sense and starts to take out a lot of the common business SaaS applications over time. I would argue, though, that's like a five-year, ten-year thing; it's going to take ages to get people to that point. I mean, people are still struggling in a workplace to get people to use AI efficiently. I read the other day that like 70-something percent of enterprise AI projects have failed and are being abandoned, and 40% of those organizations are not even bothering anymore with AI.
Like, this is going to take a lot longer than people think. But the other trend, outside of the everything app, is this sort of feeling, and you can see this in SaaS stocks on the Nasdaq right now. SaaS stocks are down. People are saying, is SaaS dead? And they're thinking the threat comes from things like Claude Code, where an enterprise will just get an agent to go off and magically spin up any software they need. And then there's the counter-narrative, or counter-argument to this, which is: have you actually tried to do this?

29:10

Speaker B

Yeah, and I think I mentioned this to you off air, but there's this idea of surface area. Okay, yes, as an organization we could clone every SaaS app we use. You and I have the ability to do that using AI, right? Like, we could definitely do it. The problem is: how many pieces of software am I then maintaining? I've got our help desk app, I've got our, you know, MRR graph app, I've got our payment gateway.

31:37

Speaker A

But I spun up 16 sub agents to do this for you.

32:06

Speaker B

Yeah. And it's like, okay, so even in a perfect world where the agents are able to perfectly maintain the software, I'm burning millions of tokens a day to handle bug fixes and feature requests on these 16 pieces of software I'm running. It's not within the capability of even a highly technical organization.

32:10

Speaker A

Let alone one that has ever been in an enterprise, or even a mid-sized business, where people have decided to build a project or software internally, like their own internal CRM. How many failed examples of that do you personally know? I can probably find 100.

32:35

Speaker B

We alone have done that four or five times, trying to build something and failing, you know, because the team just isn't capable, or we can't maintain that much stuff, or it's just way harder than you think. I always say this every time anyone wants to rewrite a code base: do you know how hard it is to get a working system that runs in production at scale with thousands of users? It is not easy. There's so much learned wisdom in those code bases that you just can't recreate it from scratch if it's got any level of complexity to it. It takes a long time and a lot of money.

32:52

Speaker A

I think, though, to the "is SaaS dead" point: there probably is a set of tools that, if you had the resources internally, you could replace, as you say, these small, tiny applets, call them. A good example might be some Windows 98-era software you're still paying for because it was the only app for that thing ever made, and they've got a hold on you and you've got no negotiation power. Whereas now you kind of have increased negotiation power, because you can say: well, if we really want to, we can get an AI agent to clone this software and make it work for our team. We don't want to, but we can do it for this cost, roughly the budget to build it and then maintain it. And in large enterprises you could have a team of, like, ten people, an internal shop that just vibe codes and maintains. Right? You could kind of do this now as well.

33:28

Speaker B

Yeah, and that sort of leads to, okay, I feel like, and I've said this over the last little while, so far the whole MCP concept is immature. The MCPs are not nearly as good as they could be. As they become better and people build these really, really nice MCP interfaces, then you can actually start to replace your SaaS apps piecemeal. So, using HubSpot as an example: say you've got a really good HubSpot MCP that can store your CRM contacts, retrieve your contacts, send emails, send mass emails, do segmentation of your audience. Say it can do all that. Well, maybe at some point, every time it was going to update a contact, and I know I'm stealing your idea from a previous episode, it's now updating your own database instead, and gradually you bootstrap your way away from HubSpot into your own thing, just by mimicking the interface of HubSpot with a new AI-created backend. Those kinds of things will be possible. And at the same time, you can probably cobble together the equivalent functionality of a lot of these products through a good combination of MCPs.
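
A toy sketch of that bootstrapping idea. The tool names (`update_contact`, `get_contact`), their argument shapes, and the SQLite backend are all invented for illustration; this is not HubSpot's actual API or any real MCP server, just the pattern of keeping the tool interface stable while swapping the backend to your own database:

```python
import sqlite3

# Hypothetical sketch: a drop-in replacement for an imagined "update_contact"
# CRM tool. The agent keeps calling the same tool name with the same
# arguments; only the backend changes from the SaaS vendor to your own DB.
class LocalCrmBackend:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS contacts "
            "(email TEXT PRIMARY KEY, name TEXT, segment TEXT)"
        )

    def update_contact(self, email, name=None, segment=None):
        # Upsert: mirror what the vendor tool would have done remotely,
        # keeping any fields the caller did not supply this time.
        self.db.execute(
            "INSERT INTO contacts (email, name, segment) VALUES (?, ?, ?) "
            "ON CONFLICT(email) DO UPDATE SET "
            "name = COALESCE(excluded.name, name), "
            "segment = COALESCE(excluded.segment, segment)",
            (email, name, segment),
        )
        self.db.commit()
        return {"email": email, "status": "updated"}

    def get_contact(self, email):
        row = self.db.execute(
            "SELECT email, name, segment FROM contacts WHERE email = ?",
            (email,),
        ).fetchone()
        return dict(zip(("email", "name", "segment"), row)) if row else None
```

The point of the sketch is that the switchover is invisible to the agent: as long as the tool contract stays the same, the CRM behind it can be migrated one tool at a time.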

34:32

Speaker A

Maybe, but I would argue this is why I think the everything-app scenario is probably the thing that's going to happen. If you're running ChatGPT Enterprise or Claude or Sim Theory or whatever, what's to stop those solutions from seeing, okay, a lot of people run Salesforce or HubSpot, or they're using Gmail, and just slowly baking it in? So one day, instead of calling the MCP, it's: we've been pulling this data in for long enough that your CRM and your email and your calendar, everything, is in this account now, because it can be agentic and AI-first. And you can turn what is five subscriptions into one AI subscription. That's what Microsoft sort of tried to do with Copilot, very unsuccessfully, right? Because they couldn't rebuild from the ground up; they've still got to sell Outlook and Excel and all these things. I guess, to me, the war's already won. It's not about baking AI into these apps. It's about a single point of control. No one wants to go into HubSpot to use their, like, code buddy. They all have these stupid names. Sasha's your AI agent in this app.

35:50

Speaker B

And you're like, but Sasha doesn't know my smoke-detector company. Like, the people who check my smoke detectors have an AI assistant. What's that going to do? There's a fire. Shit, get out.

37:13

Speaker A

So I do think this is what a lot of people are seeing. But I think the biggest threat from these companies, if I were them, is to go after those core products. This whole AGI thing is really going to come down to Microsoft Office. Like, who can win the Office dollarydoos? So yes, I think there is that everything app. Could it happen? Maybe they have app stores and this just becomes the operating system. So it's not the everything app, it's just an OS. I know OpenAI is recruiting right now to build the OS of the future. I would say there's no chance in hell they'll pull it off, but let's see. I just don't think they have the DNA in that company anymore to do anything interesting.

37:27

Speaker B

And I also think we'll see a rise of really high quality MCPs and skill sets that are paid, that really will supercharge the corporate use of AI systems like Sim Theory, like Claude, like OpenAI's, where you plug this thing in and it actually gives you all these amazing abilities for your business, and it's trusted, it's secure, that kind of thing. Not just some random GitHub project you install and hope works, right?

38:16

Speaker A

This is the other thing, right, just on that. You were talking before about using models like GLM. There's a new DeepSeek model rumored to be coming out that they say is really strong at coding, and they're claiming it's on par with the Claude Code and Opus type stuff. You've got to imagine that's going to happen this year: one of the open-source models is going to be comparable, if not out of the box then probably with a lot of tuning. And the moment it does become comparable in that agentic-loop sense, then a lot of this comes down to, well, like you said, cost. If you're an enterprise and you can run all this stuff on your own cloud, with your own user interface as well, you start to think: maybe we want to control our own agents, and maybe it's not a generic thing. And we're back to the problem I felt like we were talking about 12 months ago.

38:51

Speaker B

Yeah. I mean, like you said, you added email and calendar support to Sim Theory for your personal convenience, right? I can imagine companies will want to be able to configure the user interface the same way. So if you're an airline and you want to change someone's flights, you can have a little button that's like "change flight", and then it's an AI interface dedicated to that. That's where I can see the whole MCP UI thing, or these customized UIs, coming into play: an organization can actually have stuff that makes it more efficient for their staff to interact with the AI.

39:49

Speaker A

I hate to make the analogy, because I despise it, but it's sort of like WordPress plugins at the end of the day, right? Instead of MCP, it's almost like you want an SDK to build on, where you have the core system and you install it, like WordPress, onto your own cloud servers. And this is honestly kind of what we offer with Sim Theory now to enterprise: you put it on your own infrastructure and you own it. Then you can start to tweak it and build on it in a controlled way. And I think building in a controlled way on top of these things is highly likely to succeed, especially in the enterprise, because you inherit the security, the roles and permissions, the structure of the app, the place where people are already...

40:23

Speaker B

Logging in, access to your own data.

41:15

Speaker A

Yeah, and the safety and security of a closed-wall ecosystem, versus how many of these apps my team is going off and using because they can't get what they want internally. So I don't think everything in this space necessarily converges over time, but you can see the biggest companies under threat right now are probably ones like Salesforce, like HubSpot, and then somewhat Google Workspace and Microsoft, if they don't get their act together. Because you can see it even now: someone's spending more and more time in, say, Claude or ChatGPT Enterprise, and those products slowly introduce their own email and calendar that work with Google Workspace right out of the box. So out of the box they just work, and it's really integrated; there's an interface, but it's AI-first. Then one day they say: you know what, you're already paying for Workspace, so why have two licenses? Just switch your email, click a button, because they've been storing all your email and everything anyway. I think that's the path. That's the switcheroo that shifts the economics from one to the other.

41:17

Speaker B

And I also think the other part of it is systems that can work, like you say, with the cheaper self-hosted models, and in other ways where a business isn't getting completely fleeced on token usage. So it's not: okay, now you have to spend $300,000 a month, but your staff are supercharged. It's: okay, you spend $20,000 a month, and your staff basically have uncapped usage of all of these tools. They can just go nonstop. They can have 50 agents running all day and it makes no difference.
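
A back-of-the-envelope sketch of that cost gap. Every number here, headcount, daily token burn, and per-million-token prices, is a made-up assumption for illustration, not a real vendor rate:

```python
# Rough monthly inference bill: staff × daily tokens × working days × price.
# All inputs below are illustrative assumptions, not a real price list.
def monthly_inference_cost(staff, tokens_per_person_per_day,
                           usd_per_million_tokens, workdays=22):
    total_million_tokens = staff * tokens_per_person_per_day * workdays / 1_000_000
    return total_million_tokens * usd_per_million_tokens

# 500 staff, each burning ~2M tokens a day through agentic workflows:
frontier = monthly_inference_cost(500, 2_000_000, 15.0)  # hypothetical premium hosted model
cheap = monthly_inference_cost(500, 2_000_000, 1.0)      # hypothetical self-hosted open model
print(frontier, cheap)  # 330000.0 22000.0
```

At these made-up prices the same workload lands right around the "$300,000 versus $20,000 a month" gap described above, which is why the per-token price, not model quality alone, decides whether usage can be effectively uncapped for staff.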

42:32

Speaker A

And I think the other problem is model lock-in as well. If you build your whole stack around a single model, it's a single point of failure. It's also, as we saw last year, that these models change often. A better model just comes along all of a sudden, and then you're locked into this one and they're charging you more.

43:05

Speaker B

Or your existing model becomes worse, as happened with Google, right? Where they just completely hollowed the thing out, and something that was basically my girlfriend... she's dead. You know? It just doesn't work.

43:25

Speaker A

Do you want to know something interesting? We basically lost a bunch of credit cards. All of our billing got hacked last week and our bank decided to shut all of our cards down, including our backups. Thank you, bank, we love you. Anyway, during that time some of the models stopped working for me locally, because I run them through different accounts, right? I'd been using Opus quite a bit, and I switched to Gemini 3 Pro. A lot of people have been complaining about that model: after about three turns it just loses the plot, doesn't know what you're talking about, and repeats the same sentences. Similar to you, in the early days I was like, it must be something we're doing in Sim Theory, it must be a bug, there must be something about this model that isn't working correctly. And we dug into it, and it's not; it's doing what it's supposed to do. Then I started googling around, and sure enough, on Reddit there are a ton of posts with people saying exactly the same thing: after a few turns it's just dumb. I noticed that if you use it differently, try to one-shot more with it and quickly pivot if you don't get what you want, it's quite effective, albeit very slow. And then I was so frustrated that I was like, I need that turn-by-turn model back. I switched back to Gemini 2.5 Pro, and unequivocally, hands down, I still think it's a far better experience and model than Gemini 3 Pro. Even though, weirdly, Gemini 3 put Google back on the map. That's strange. And then I went one better: maybe I'll go to Gemini 3 Flash. And I think Flash, again, is a better model than Gemini 3 Pro. I'm putting it out there. I don't know what happened with Google. I think Gemini 3 Pro has legs; I'm hoping when it comes out of preview they just solve this problem.
Because if you have long context and you one- or two-shot a problem, it is the best model to use right now, hands down, 100%. I use it all the time to get myself out of trouble when Claude Opus gets stuck in a loop, and it does often, where it just can't solve the problem and keeps trying and keeps trying, until after a while you want to bang your head against the wall. That's when I go to Gemini 3 Pro. It's my YOLO Hail Mary model right now. It used to be the GPTs, now it's Gemini 3 Pro. But as you say, it's not a daily driver anymore. It's not something you're going to live in.

43:39

Speaker B

Yeah. So I think that's the thing: it's got to be this model flexibility, having the ability to change when you need to. And coming back to cost, because I really do think it's going to be the big thing this year: people's COGS line in their organization is going to be made up of a lot of AI inference now, I predict, if it isn't already. Being able to have control over that, to have a fixed cost on that line item, will be a very important factor for businesses that want to mass roll out to their staff. And honestly, I feel it's going to become an expectation. Just as we've discussed, employers should be asking in job interviews: show me how you work with AI. I really do think that should be a key part of every job interview now, no matter what the role. Maybe not fruit picker, but, you know, a lot of jobs. I also think potential employees should be asking the organization: what's your AI strategy? How do you work with AI? Do you provide me with models? What do I get to work with if I come here? Because otherwise, what happens? People have to do it at personal expense, bypass the company's security controls, things like that. It's going to become such an expected part of work that organizations need a defined strategy they can stand behind and justify. And on top of that, I also think limitations are a big thing. Imagine you're expected to use AI at work, but there's a daily limit on how much you can use it. That's really going to affect your job. Everyone talks about vibe coding, vibe working, vibe co-working, whatever all this stuff is. The only way you can vibe is if you can do it, as John Carmack would say, fearlessly. You need to be able to do it without worrying that if you get this one wrong, that's it.
Or: oh, I'm costing myself a fortune trying to figure this problem out. It needs to be something you can do freely, and the only way you can do that is to have the lesser models working in a really efficient way.

46:26

Speaker A

I did see John Carmack is vibe coding now as well, by the way. He is a convert. So this is one of the other things, and I feel like this is a catch-up special. Sorry to the guy that likes us to stick to an hour.

48:36

Speaker B

But everyone said, screw that guy.

48:49

Speaker A

Look, I'm open to feedback from that guy. It's fine. I get it. We ran long. Whatever. So, I think the... I've actually lost track.

48:52

Speaker B

Should we pivot to the Wuthering Heights world that I created?

49:07

Speaker A

No, we're not pivoting to Wuthering Heights world yet. But I think you're right. There's definitely a cost justification in terms of what benefit we get. But surely, if I were in charge of an enterprise right now, I'd be like: let's just invest and figure this out. Think about all the wastage in the enterprise on projects and consultants and all this stuff they spend money on. I would be happy to take some of that budget and turn my own team into consultants: here are all the tools, have everything, have all the models, have any MCP you want, go nuts. Then evaluate that as a pilot and invest in training. Because to me, giving them Copilot and saying "good luck" didn't work. And that's what I think everyone's seen.

49:10

Speaker B

Yeah, I agree. I meant to make that point earlier. The other thing, and I don't know why I'm making predictions for this year, but whatever.

50:07

Speaker A

I mean, it is the start of the year. It would be the time to do it.

50:14

Speaker B

Okay, good. So I think that this year we need to see more training. We need to see more people talking about how to work with the technology instead of just showing off: look at what I did. It needs to be: here are some strategies you can use to do this better. And I know we're going to have a whole world of the SEO-consultants-of-AI types, as in: I am the prompt king, I am the prompt master, talk to me. But what I think we need is formalized enterprise training, where you can take a thousand people and train them on the best way to use it within your organization, things like that. Because without that, you're going to have a group of people who feel alienated from what's possible with the technology. And part of it, as we always say: any time anyone asks for advice in our This Day in AI Discord, my response is always, go try it. Just go try it. That's a good way to get started, but I think it needs to be deeper than that to get the most out of it for an organization.

50:17

Speaker A

Yeah. And I think it's also about realizing there are three different aspects to how we work with AI today. There's the chat-and-collaborate methodology, which is getting more advanced as it gets access to MCPs and internal knowledge and context. There's the agent aspect: these laborious tasks I just don't want to do, like some analysis, or creating a spreadsheet template, or modifying a spreadsheet, whatever it is. It's: hey, you go do that, I'm delegating. I'd still argue you can do that in collaborate-chat mode; the lines between an agent and just tools in a loop are blurred. Anyway, so there are those two aspects, but then there's a third, and I think this is where a lot of people get confused: I just want automation that uses AI. I want to have an inbox monitored by an AI that answers questions and escalates when it can't, or does some repetitive process autonomously. They're three different categories, and people blur them, and then they try that third category with the current tools, and it can't do it, so it sucks.

51:21

Speaker B

Yeah, I think you're right. It's like that classic meme with the missing step. We think: okay, if we get it to a certain point, then it'll just do all of our work for us. But we need to actually get the mechanics of that middle step working before it will. I agree: once people experience a proper agentic workflow that runs in a predictable way, the next step is: okay, now I want it to be event-driven. It has to happen every time this event occurs. And I think that's where we'll see some major productivity gains, in terms of semi-replacing parts of people's jobs, once we can get there reliably.
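
A minimal sketch of what "event-driven" could look like. The bus, the event names, and the stubbed triage rule are all invented for illustration; in practice the handler body would call out to an agent rather than run a keyword check:

```python
# Toy event bus: the same workflow a person would trigger by hand gets
# wired to an event instead. Names and triage logic are illustrative only.
class EventBus:
    def __init__(self):
        self.handlers = {}

    def on(self, event_type):
        # Decorator that registers a handler for an event type.
        def register(fn):
            self.handlers.setdefault(event_type, []).append(fn)
            return fn
        return register

    def emit(self, event_type, payload):
        # Fire every handler registered for this event, collect results.
        return [fn(payload) for fn in self.handlers.get(event_type, [])]

bus = EventBus()

@bus.on("email.received")
def triage(email):
    # Stand-in for an agent call: answer routine mail, escalate the rest.
    if "refund" in email["subject"].lower():
        return {"action": "escalate", "to": "support-team"}
    return {"action": "auto_reply"}
```

The reliability question in the conversation above lives inside `triage`: the wiring is trivial, but the handler has to behave predictably every time the event fires before anyone will let it run unattended.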

52:29

Speaker A

I think the other thing, really, and maybe this whole episode feels in a weird way like we're pooh-poohing a lot of it, but I'm more bullish than I've ever been about this stuff. I just think a lot of this change comes as step changes within gradual trends. And at least from what I see keeping up to date with it, it's going to take a lot of time and a lot of problem solving to get to the point I think we all want to get to, which is: I have this collaborator and co-worker through the day that I can truly work with and that understands my context. We're there to some extent; I feel like that with some of my assistants through the day. But I just think there's a long way to go before this is embedded everywhere in society and it's the end of SaaS. It just feels like there are so many layers to this.

53:07

Speaker B

Well, the other point to make about SaaS is something we've seen, and I don't want to completely whinge, but I will. Companies like Atlassian claim to have an MCP, but unless you're part of their elite group, you can't use it. We've had so many people say: can you add Trello? Can you add Atlassian? We can't, because they won't whitelist us, so we can't connect to them despite the MCP existing. Now, what motivation comes from that? It's: well, let's just replace it. How hard would it be to replace a Trello-like project tool with an MCP interface? It's not that hard. And I feel that if a lot of these companies don't get their act together, either being open about access to their platform, or saying "we're not going to be open, you must use our own alternative if you want our technology" and dying on that battlefield, then people are going to replace them. There will be dedicated paid MCPs offering that functionality as an alternative to the existing incumbents who don't want to let their data out.

54:07

Speaker A

Yeah. To me, that's where there's a disruption layer where people can build businesses: you just clone Trello, put all the user roles, security, permissions, and features in, make it AI-first and very open, and, as you point out, scale it.

55:12

Speaker B

If you use an existing AI platform, you've already got all that. At least on Sim Theory you do: roles, permissions, user management, the ability to revoke access, data security, all that stuff.

55:30

Speaker A

So you really just want... I think the SDK is coming. I think the SDK for these platforms is coming, and the future of building apps will be on the platforms: you'll build add-ons, actual software add-ons, to ChatGPT. It won't be like MCP UI. I still think that's the dumbest thing on earth.

55:41

Speaker B

Or, as we've discussed, controlled computer use through local MCPs, through something like Simlink or the Claude desktop app. So, okay, we're not going to replace the software, but you can control it through your AI platform: existing local software, either on your own computer or a dedicated machine you run just for that purpose. So then, okay, I can't use the Trello MCP, but I can use a desktop-based task management system, or some other system, or Trello through a browser, for example. And I'm only harping on that example to keep a consistent story. But the idea is that you'll still be controlling software through a centralized point. Like you're saying, don't you think the...

56:00

Speaker A

...modern app exchange, like in Salesforce, where they have their dumb business apps that aren't really apps, and they tax people like hell to be in there, I know that from firsthand experience. Don't you think this is just what's going to happen with AI platforms? They'll have an SDK, and I think ChatGPT is probably doing this now, and then there are apps on the platform. Some AI dev can come and vibe code with that SDK, build a Trello with whatever ISO specifications, roles, permissions, payments, everything, and then charge, say, $2 per user for this vibe-coded app, and they maintain it and the company doesn't have to worry. I can see that playing out, maybe this year, and that is the beginning of the end, or the disruption, of SaaS, in my opinion, or some elements of SaaS. I don't think it's fully dead; I think there are probably just new opportunities to disrupt all these companies by being cheaper, faster, more connected, more AI-first. To me, calling it dead is, quite frankly...

56:48

Speaker B

And I think it'll be the same for data-based MCPs, in the sense that, say I want to build a context to solve a certain data-related problem. Right now the solution for a lot of people is crawling: we're going to crawl the web and try to extract data with Firecrawl, and it gives you structured data back. And the reality is that that sort of data gathering is still quite messy and unpredictable. I can imagine there will be companies like in the sports betting arena, where you can pay ten grand a month to get all of the data from everywhere for all the sports, whatever it is. Imagine an MCP paid like that, where you just bang, add it to context, or select a specific slice of data you want in your context, and it comes in, but you're paying for that privilege. Imagine the finance industry: all the bond information. And I know we have MCPs that do some of this already, but I mean paid, proprietary data sets that are optimized for AI, that you pay for. I can't believe we're not seeing more of that already, because I think there's just so much scope for it.
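
A sketch of the difference being described: instead of crawling and hoping the extraction holds, the buyer queries an already structured feed. The dataset, the field names, and the `query_bonds` interface are all fabricated for illustration, not any real financial data product:

```python
# Invented stand-in for a paid, AI-ready data feed: clean records with a
# queryable interface, the kind of call a paid data MCP tool could expose.
BOND_DATA = [
    {"issuer": "ACME Corp", "coupon": 5.25, "maturity": 2031, "rating": "BBB"},
    {"issuer": "Globex", "coupon": 4.10, "maturity": 2028, "rating": "A"},
    {"issuer": "Initech", "coupon": 6.80, "maturity": 2033, "rating": "BB"},
]

def query_bonds(min_coupon=0.0, allowed_ratings=None):
    """Filter the feed by coupon and (optionally) by a set of ratings."""
    rows = [b for b in BOND_DATA if b["coupon"] >= min_coupon]
    if allowed_ratings is not None:
        rows = [b for b in rows if b["rating"] in allowed_ratings]
    return rows
```

The value proposition in the conversation is exactly this shape: the model gets deterministic, structured context in one tool call, instead of a scrape whose structure can change under it.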

58:02

Speaker A

I think we're not seeing it because these companies have done such a shitty job of distributing these MCPs. They haven't told the story, they haven't built it right. ChatGPT has gone off and tried to build an app store around it when it's not an app; it's just a source of data and actions. The real apps are still user interfaces built on top, standing on the shoulders of this.

59:15

Speaker B

Yeah. I mean, I don't want to promote LinkedIn as the pinnacle of data, but imagine if, instead of always trying to sell people their products, they just had an MCP where you can name an industry and it helps you vibe out leads and emails to those people in an AI platform. That alone would be worth a ton of money for people doing lead building: just type the criteria for the people you want to target into the AI, and it gives you a bloody table of them. That alone would be massive. And there are so many organizations sitting on mountains of proprietary data, especially in industries other than tech, that could make proprietary MCPs that companies could then add to their AI platforms and absolutely dominate with. We know that building a good context is important; MCPs that can do it for you, even paid ones, are just going to be so valuable.

59:41

Speaker A

Yeah, exactly. Is this a good time to promote our LinkedIn group? Because it could be.

1:00:43

Speaker B

Did I even join it?

1:00:50

Speaker A

There are 165 people in it and I mentioned it once, so... average AI user group. I'll put a link in the description; get on there. I hate LinkedIn so I won't really be there, but I'll occasionally check. I just checked then. So join the group.

1:00:51

Speaker B

You know what, I'm gonna start a post on there about this very topic: what proprietary sets of data do you think would do well as paid MCPs? Because I really think we've got enough listeners that, in their respective industries, there are huge opportunities to do that. I personally know some people who could be doing it right now, and I think it's a really, really good way to become a major player in this industry.

1:01:08

Speaker A

The AI LinkedIn group. Link in the description below. It could be good. I actually think it could be good; there are some really interesting people in that group if you look through it. So consider joining, even if you don't use LinkedIn, just to drum up the numbers. Let's try and get to 2,000.

1:01:34

Speaker B

Yeah. This whole podcast was all just concocted to get more followers on LinkedIn.

1:01:51

Speaker A

I'm gonna release a training course on there. Some sort of hype-boy thing, like one of those, you know...

1:01:55

Speaker B

Make sure you put spaces between every sentence.

1:02:01

Speaker A

Here are 20 lessons I learned from my kid's soccer match this year about AI.

1:02:04

Speaker B

I was on my laptop at my wedding deploying crucial software updates.

1:02:10

Speaker A

Yeah, okay, so I do want to detour a little bit to something someone sent us. Let me... I didn't bring it up or prepare, so please, please hold. Back to Geoffrey Hinton. You mentioned our boy Geoff. He was actually in Australia recently, like 11 days ago. He was in the city of Hobart. For those that don't know, Hobart is in Tasmania, which I thought was a separate country until recently; it's a small island at the bottom of Australia. So he's doing gigs in Hobart. I'm not sure how well it's going for poor old Geoffrey, but he was at Invest Hobart. "The future looks good from here" is the slogan of the presentation. And I'm going to try and play the clip from X. I didn't cue it up properly, but hang on.

1:02:15

Speaker B

I heard that, if that helps. When it's in its own voice, it's no longer English. We won't know what it's thinking.

1:03:10

Speaker A

Thank you, Professor Hinton.

1:03:17

Speaker B

Now, I think for the purposes of the lecture event, we're going to have to wrap things up.

1:03:19

Speaker A

Are you happy to stay around for.

1:03:25

Speaker B

A little while afterwards for any burning questions people might have? Actually, I'd rather get back to writing my book. Okay, no worries.

1:03:26

Speaker A

That is real. That's like a real clip. He actually said that.

1:03:36

Speaker B

I don't care. What do you mean, its internal voice is English? No, it isn't. Didn't he invent this crap?

1:03:44

Speaker A

Yeah. Anyway, it says a public talk by Nobel laureate Professor Geoffrey Hinton. And someone in our community sent it to us because at the end he basically just refuses to stay for questions.

1:03:52

Speaker B

He must be, like, if he's doing that, he must be hurting for money. Like, imagine how annoyed he is that people are getting billions and billions of dollars and he has to fly to Australia to talk to some people he clearly doesn't want to be around.

1:04:06

Speaker A

I don't think he cares about money. I think this guy cares about being relevant. Like maybe he's just there for the women.

1:04:21

Speaker B

Like he, he seriously might. Like, based on his previous statements, he might be just out there looking to upgrade his girlfriend.

1:04:27

Speaker A

Yeah, maybe he thinks the ladies in Tassie... yeah, he could be doing that. Maybe we need like the Tasmanian version of that song. We will see. All right, I think that'll do us. I'm kind of done here.

1:04:35

Speaker B

Oh, yes. What a triumphant start to the year. I'm bored, Mike. Do you have time for questions from the audience or you got to get back to your book?

1:05:05

Speaker A

Do you know what's funny? I do need to get back to Sim Theory v2, which we said we would release as a holiday update, then reneged on that, and then said it would come out sometime eventually.

1:05:14

Speaker B

The Sim Theory.

1:05:26

Speaker A

Classic us. There's actually people in the LinkedIn group and the Discord being like, do you guys have a plan with Sim Theory? Shouldn't everyone be using this? LOLs. But yeah, if you do want to sign up, there's my plug: Sim Theory AI. Go there and get access to all the stuff we're talking about, and more of it soon. But we do want to step it up a notch this year. One thing we really do have planned for the first time ever is we're going to go on tour. It's called the Still Relevant Tour. I think later this week we'll record another episode, and next week I'll probably drop a register-your-interest form. We actually want to get a sense of where the clusters of our audience are that would be interested in attending an in-person event. It would be a chance to meet other members of the This Day in AI community, and we'll probably record a live show or do something like that while we're on the road. So if you are interested, tune in next week and I'll put a link in the description and you can fill that in. If not, LinkedIn group, yada yada, etc., like, sub, whatever. Chris, any final thoughts? First week back?

1:05:27

Speaker B

Yeah, I think I've made my thoughts clear on it. I think this is the year where people really need to think about how they work as an organization with the technology and what their own strategy for it is. It's a good time of year to be thinking about that. I don't think we're going to see so many mind-blowing updates from the various labs; I think it's the year of building and software. I know we sort of said the same thing last year, but I haven't been through a more confusing time in terms of the way people work with this technology. I don't think even Claude themselves, who arguably are leading at the moment on the software front, know the right way to work with the technology just now. I think everybody's still learning, so everyone needs to look at what you're trying to get out of it and how you can empower the most people to be using it. Because, as you said, I don't want to come across like I'm doubting anything. I think it's unbelievable stuff. I'm so excited to be back and working on everything. It's just that I really feel like it's the most confused time about what your daily workflow looks like.

1:06:39

Speaker A

Yeah, I would agree with that. I must admit, over the break I was working on a number of different things. Like uploading my bank statements into the AI and getting it to basically shame me and roast me, create a nice budget, and explain where my money's going in charts and things like that. I was throwing that into a folder in the updated version of Sim Theory, which made it easier to come back, pin that folder into the context, work with it, and evolve it as a project over time. And I thought, oh, this is a really good way of working. And then I had emails related to that, so I got the email in there too. So I think really starting to look at the software design and build things out around how individuals actually work is critical. It's just so critical that the people building this software are working in it every day. I think that's why Claude Code and Cursor and those applications have been so successful: it's developers building tools for developers. So as more white-collar knowledge workers build tools for knowledge workers, things will get far better. And I'm excited. I think we're going to see some great models. I really can't wait to see this DeepSeek model. I hope it's good. Rumors say it's good. So we'll see. All right, thanks for listening. We'll see you this week, because we're going to do another show. So we'll see you soon. Goodbye.

1:07:48