Practical AI

2025 was the year of agents, what's coming in 2026?

51 min
Jan 9, 2026
Summary

The hosts of Practical AI reflect on 2025 as the year AI agents became mainstream, discussing the transformative impact of agentic workflows when properly implemented with domain expertise. They explore the shift from focusing on models to building integrated AI systems, while looking ahead to 2026 trends including infrastructure challenges, power consumption issues, and the growing complexity of AI ecosystems.

Insights
  • AI agents require significant domain expertise and proper integration to succeed - many failures stem from poor processes rather than poor technology
  • The bottleneck has shifted from GPU availability to power consumption, creating geopolitical implications as nations compete for energy resources to fuel AI development
  • Multimodal AI adoption is primarily input-focused in business contexts, with most outputs still being text-based rather than rich multimedia
  • The most valuable skill set emerging is AI system integration - connecting tools, databases, and services through orchestration layers rather than building single models
  • Model performance has plateaued while predictive AI continues advancing rapidly, suggesting the future lies in sophisticated tool orchestration rather than better foundation models
Trends
  • Transition from model-centric to agent-centric AI implementations
  • Power and energy infrastructure becoming the primary constraint for AI development
  • Commoditization of AI models with differentiation shifting to system integration
  • Rise of reasoning models introducing latency trade-offs for enhanced capabilities
  • Growing complexity and fragmentation of AI ecosystems requiring specialized integration skills
  • Physical AI and consumer-accessible AI hardware entering mainstream markets
  • Geopolitical competition intensifying around AI infrastructure and energy resources
  • Enterprise focus shifting from AI adoption to AI system compliance and security
  • Emergence of AI maker era with affordable embedded AI capabilities
  • Predictive AI models continuing rapid advancement while generative AI plateaus
Quotes
"Clearly some powerful alien tool was handed around, except it comes with no manual and everyone has to figure out how to hold it and operate it while the resulting magnitude 9 earthquake is rocking the profession."
Andrej Karpathy (quoted by Chris Benson)
"In the matter of six minutes it produced what I would have at least six weeks of work, at least six weeks of work in just a matter of a handful of minutes."
Chris Benson
"AI doesn't solve problems. That problem."
Daniel Whitenack
"2026, probably the fastest moving AI year ever coming up here."
Chris Benson
"The most relevant thing is, you know, flexibility, not getting lock in, the ability for you to use a bunch of different models, the ability for you to, you know, construct a system."
Daniel Whitenack
Full Transcript
4 Speakers
Speaker A

Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work and create. Our goal is to help make AI technology practical, productive and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind the scenes, and AI insights. You can learn more at practicalai.fm. Now onto the show.

0:04

Speaker B

Welcome to a new year of practical AI and an episode with just Chris and I where we try to keep you fully connected with everything that's happening in the AI world, which is a lot these days, both last year and this year. But I am Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined, as always by my co host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. Happy New Year, Chris.

0:49

Speaker C

Hey, Happy New Year, Daniel. It's 2026, probably the fastest moving AI year ever coming up here.

1:20

Speaker B

First. Well, every, every new year, I guess, has been the fastest-moving AI year. Well, I guess since the podcast started, you know, whatever, eight years ago.

1:29

Speaker C

It was a safe thing for me to say. There you go.

1:39

Speaker B

Yeah, yeah, yeah, safe thing for you to say. I mean, granted, these last few years have been a little bit frantic in relation to the years prior to that with the podcast, which felt, you know, in retrospect, seem a little bit chill. But yeah, it, it definitely seems like 2025 was a big. 2026 will be a big year. And so as we're coming into the new year for our listeners, usually we try to do some type of. We don't have a strict format here because we're pretty casual, but some type of discussion of things that happened in 2026, you know, themes looking forward or things that happen in 2025. I'm already a year ahead, I guess. Things that happen in 2025 and, and things that may or may not happen in 2026. Usually our predictions are wrong, as are all predictions.

1:42

Speaker C

I'm okay with that.

2:40

Speaker B

All models are wrong. But hopefully this, hopefully this podcast will be useful. Yeah. So interesting, interesting times, Chris. Interesting dynamics in our world in all sorts of ways. But if we, if we hone in on AI, I think at the, at the beginning of last year, if I'm remembering correctly, there were a couple things we talked about. One of those things, or certainly at least a theme that we've talked about a lot this year, which if we were to categorize the year 2025, I don't know if you would agree, Chris, but it does seem like the year that we transitioned to talk about AI agents, it was sort of like, for a while we talked about models, then we kind of talked about assistants, and then we really kind of transitioned to talking about agents. Agents are autonomous AI. That was a key theme of 2025, I guess. One first question, Chris. Did we actually. What did we actually do with AI agents in 2025 overall? Was it a positive, positive and. Or successful year of trying agents?

2:41

Speaker C

Well, I think there was, like, untold levels of hype around agents, as there always is every time we hit a new thing. And I think a lot of organizations did try to dip their toe into it. Now, as we're reading all the things that are out there, I've seen some crazy claims, from, like, nobody successfully using them all the way to, like, 70% of all existing organizations now using AI agents, which I'm like, totally, like, BS, you know, which is not.

3:58

Speaker B

Not true at all.

4:32

Speaker C

Not true at all. It's a long way from the truth. But I do think a lot of organizations are kind of whiplashed right now and kind of going, holy cow, what's this agent thing? I'm reading about it everywhere, and we're trying to figure out what to do. And that's happening at a moment where those who are diving in and finding a use case that they can find success with, which is not easy in all cases, are making some big wows within their little world. And then those who aren't are still kind of fumbling in the dark. And I think that's fair. That doesn't mean that one person is smarter than the other. It just means that looking into the right use case, getting the right people to address it, and having a good business case for it makes a lot of difference. And, you know, as we talk about that, I think one way of kind of leaping into that dichotomy is that Andrej Karpathy put out a post, a tweet, on X. Do we still call them tweets? I don't. I don't.

4:33

Speaker B

I have no idea.

5:43

Speaker C

I. I'm not sure. But on X, and I won't read the whole thing because it was a fairly long one. But he's basically acknowledging. I mean, you're talking about one of the world's preeminent AI researchers that, you know, within our little AI bubble world, on the technical side, he is a superstar in every possible way. And he's kind of saying, holy cow. And I'm totally paraphrasing, he didn't actually say holy cow. He's kind of saying, holy cow, even I at moments am feeling a bit left behind with how fast this is changing. And the context he's kind of talking about is coding and stuff: that after leading in the last few years and seeing models, it seems quaint to talk about models, as you pointed out now, but talking about these models that were getting better and better steadily, they still weren't doing great coding, and there was the need for senior engineers to kind of correct it. And was it more trouble to use the model and the agent to do the coding or not? Did you just spend more time fixing errors? Well, all that really changed at the end of 2025. You know, with Opus 4.5 and OpenAI's 5.2 model in particular, as well as several others, but those are the ones that are called out the most. Like they got to where they could do senior level coding really well without mistakes. And I've griped, because I'm a Rust programmer, that because that's such a small community of programmers overall, the models weren't as good as they would be in, like, Python and JavaScript. Well, guess what? It's kicking butt in Rust now. So it's no longer rusty. And so, like, I for one, as I am upskilling, and as someone who has been using AI as we have gone forward in coding, my workflow has changed dramatically in the last two months in terms of understanding how to effectively use coding agents to do that. 
And I don't think coding is the only area that's impacting. I think there are a lot of areas where agentic AI, once you get a use case that is giving you some sense of success, is, like, changing that small field that you're playing on. And that might be happening many times over. What do you think about that?

5:44

Speaker B

Yeah, it's interesting just to read a little bit of that tweet that you reference. Karpathy mentions: clearly some powerful alien tool was handed around, except it comes with no manual and everyone has to figure out how to hold it and operate it while the resulting magnitude 9 earthquake is rocking the profession. And he kind of ends saying, roll up your sleeves to not fall behind. So, yeah, I definitely have felt this, Chris. Just from, you know, my perspective, we get to kind of wax poetic on these episodes where it's just you and I. But from my perspective in building a company over this past year, Prediction Guard, I'm reflecting on our last board meeting, and the reflection back to us as leadership was, wow, essentially making the note that, product-wise, you all were able to advance so much more quickly without expanding your team in those last two quarters of the year than as the company had progressed before. These are obviously investors, you know, not that they have no technology background, but they're not coders. But just from the output, right, the pace of development of the product and what we're able to achieve is significant. It's significant enough to be noticed in that way without the larger team that would typically have been required to reach that scale or support what we're supporting.

8:18

Speaker C

Yeah. I want to relay a moment that I had, and, you know, like, I sent you a text over the holidays saying, holy cow, we got to talk about this at the beginning of the year. And I want to share it with you now, because I did not relay what happened to me that made me send that text, and it's very relevant to this. I had been proposing a really complicated autonomy-based project for work, and I'm not going to get into specifics on what that is. But, to take out the hype for a moment, I had spent a lot of time thinking and researching all the different things that had to go into that. There was a tremendous amount of complexity involved, and I developed a really complex and very detailed and specific prompt on how to get there at, like, a production quality, where it wasn't like what we would have talked about a year ago, where it was like AI slop code coming out. And so I finally got to this point where I tried that out, and in the matter of six minutes it produced what would have been at least six weeks of work, at least six weeks of work, in just a matter of a handful of minutes. Now, I knew what I needed to put in. I knew a lot of that stuff and I was able to get a really good prompt going. But the actual work, like, suddenly I had a large project laid out in VS Code that had all these different things tied together. And I just, I don't think I've ever had that big of an aha moment in coding. And I was like, gotta tell Dan about this. So that was what prompted it. And it made me flip over and realize this is the way forward and I'm 100% in.

10:10

Speaker B

So just to pick apart a little bit of what you said, Chris, there are some highlights there that I think are takeaways from our agentic work in 2025. One of those is, I would say with no doubt at this point: these agentic workflows, especially driven by folks who have the relevant domain knowledge, are legitimately transformative, multiplicative, however you want to say it. I think we can confirm that. However, I think one of the things you highlighted is some of what was highlighted throughout the year, like the MIT study of things failing. Gartner says, you know, 11% of organizations have agentic AI in production and that 40% of projects will fail by 2027. I think part of that is driven maybe by two things that we've seen over this year. One is you do actually need to have a certain level of expertise to know both how to prompt and configure these systems, but also what data sources to connect into them, how to utilize, as Karpathy puts it, this alien tool, how to hold it, how to add in, you know, an MCP server, what type of automation am I really doing, how should it run, how do I integrate it into my day to day workflow. If you have that expertise around the integration side and infusing the domain knowledge, that is a key piece of it. And without that there can be a lot of failure. Secondly, I think sometimes people are just trying to automate processes that are problematic, not because the automation is bad, but because they're just bad processes to begin with. So, you know, AI doesn't solve that problem.
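The "model connected to tools" pattern discussed here can be sketched in a few lines. This is a minimal illustration, not any particular framework's API: the tool functions and the hard-coded plan are hypothetical stand-ins, where a real agent would receive tool-call requests step by step from an LLM (for example, via an MCP server or a function-calling API).

```python
def lookup_order(order_id: str) -> dict:
    # Hypothetical domain tool; a real one would query a database or API.
    return {"order_id": order_id, "status": "shipped"}

def send_summary(text: str) -> str:
    # Hypothetical side-effecting tool (email, ticket update, etc.).
    return f"sent: {text}"

# Tool registry: the integration layer that connects the model to systems.
TOOLS = {"lookup_order": lookup_order, "send_summary": send_summary}

def run_agent(plan: list[dict]) -> list:
    """Execute a sequence of tool calls, collecting each result.

    In a real agent, `plan` would be produced incrementally by the model,
    which sees each tool result before deciding on the next call."""
    results = []
    for step in plan:
        tool = TOOLS[step["tool"]]
        results.append(tool(**step["args"]))
    return results

plan = [
    {"tool": "lookup_order", "args": {"order_id": "A-123"}},
    {"tool": "send_summary", "args": {"text": "Order A-123 has shipped"}},
]
print(run_agent(plan))
```

The point of the sketch is the shape, not the tools: the domain expertise lives in which tools get registered and how the workflow is framed, which is exactly where the failures described above tend to happen.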

12:07

Speaker C

Yeah. There was one other takeaway that I'll throw in before we move on from this topic, and that is prior to developing the, I can't call it a stub for the thing because it was too much code, but prior to that kind of final prompt that got me well into the project, there had been literally many hundreds of prompts before that, which got me ready for it. And I think a key thing that I came away with, and which I've been sharing with other people over the last few months, is that that expertise is important for how you shape prompts to get the thing you need, and you learn from it. So at no point was the AI leaving me behind; it would open up new doors, but I had to walk through those doors, take the learnings, and develop the next prompt from them. And to your point a moment ago about that expertise, it took that combination of domain expertise with prompting your way through a long sequence of prompts to finally get to the point where you understood the system well enough that you could describe it in a prompt well enough for a sophisticated agentic model to put the whole thing together in nearly a production ready mode. So there was a lot of learning involved in that. It wasn't just magic in five minutes. I was just quite taken by having gone through that long process, being able to do that final prompt and have so much produced that was at the quality level that I would have demanded it be.

14:07

Speaker D

You know, for most developers, you've had this: marketing calls, sales calls, and they want a new landing page, they want a redirect, they want designs implemented. And of course engineering says, yeah, we'll get to it. But that bottleneck is why thousands of businesses, from early stage startups to Fortune 500s, are choosing to build their websites in Framer, where changes take minutes instead of days. So our friends at Framer, they're an enterprise grade no code website builder that gives designers and marketers the ability to fully own your .com without having to rely on the engineering team. It works like your team's favorite design tool, with real time collaboration, a robust CMS with everything you need for great SEO, and advanced analytics that include integrated A/B testing. Changes to your Framer site go live to the web in seconds with one click publish, without help from engineering. That's priceless. That keeps you on task, on target, delivering features, and that's how your team reduces dependencies and reaches escape velocity. And this isn't a toy. Framer is an enterprise solution with premium hosting, enterprise grade security, and 99.99% uptime SLAs. Companies like Perplexity, Miro and Mixpanel trust Framer for their websites. Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framer has programs for startups, scale ups and large enterprises to make going from an idea to a live site as easy and as fast as possible. Okay, so the next step is to learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/PracticalAI for 30% off a Framer Pro annual plan. That's framer.com/PracticalAI for thirty percent off. Rules and restrictions may apply.

15:51

Speaker B

Chris, I think the other, or at least one other, theme that I know we highlighted kind of going into this year was multimodal AI. I don't think, unless I'm misremembering or not seeing the right transcript, that we predicted kind of this reasoning era with models. So, like, if we just look at the progression of models, which is definitely not the whole picture, as we just talked about, a lot of what happened was around agents. For those that are listening and maybe parsing through these terms: an agent is not just a model, it is a model that is connected to various external systems, some of which could be AI, some of which could not be AI, and actually interacts with those systems to accomplish a goal. So we're not just talking about models anymore, we're talking about these systems. But in terms of the models, I think we predicted more multimodality kind of coming into this year, which certainly we have. Right? There have been many different vision language models, video models, music models, all sorts of things, Sora, all of these things that we've seen over 2025. And then there's these other reasoning models. Starting with the multimodal ones, Chris, I think, at least where I'm sitting, and it could be just my role or the types of things that I'm seeing, in the majority, actually, I think yes. I would say all of the customer interactions that we are having and the people that I'm talking to really are using multimodal AI in terms of multimodal on the input side. I still very much do not interact with people that are doing kind of multimodal on the output side. What I mean by that is: certainly I see videos coming out of Sora as reels on, you know, social media. And so I know that that is happening. Right? 
But in terms of the business world, real business context, I definitely see, you know, video, audio, image and text going into models, but not so much coming out. Really, what's still coming out is either text or some form of text, like some structure, a JSON structure, a tool call, some template, some field, some whatever; it is not really multimodal output. Maybe the exception to that might be synthesized speech, which is pretty pervasive everywhere as a thing in and of itself. So that's maybe a standout.

17:51

Speaker C

No, I think you're right, that is a standout. But I would have thought of that as a single mode on the output side. And I think you're calling out that there is a big opportunity here, especially when you combine it with agents in different capacities, to have a richer output experience. Because at the end of the day, I mean, I know that whereas my non AI industry family members are more taken with the videos and things like that for entertainment, with the work that I usually do, it's more that text output. And I could imagine a much richer output experience. To your point, it's easy to envision, especially when you think about, you know, like, I'm going to dump all the different things into my input that I want it to process and assess for the output, but the output's still pretty basic. It might be that real time voice interaction that was so hot six months ago, you know, that everybody got into and then it kind of passed. And, you know, we all get our expectations set very quickly, like, oh, okay, that's just real time voice, no problem. But if you were to put a bunch of things together on output, where you're getting text, you're getting that real time voice in the conversational sense, you're getting supporting media, I think it could level up. So it probably will.

20:50

Speaker B

Yeah. And I guess maybe one of the themes from this last year that we can take away as well is this rising up of the reasoning era. So, reasoning models: we've talked about these on the show, but just as a reminder for people, in case you missed out on our discussions this year, in some ways these reasoning models are mislabeled, because they don't reason about anything, they just produce text. What is interesting is that they produce a segment of text that imitates or mimics reasoning or a chain of thought, right? The model kind of, quote unquote, thinks through a problem by generating text representing that thinking through of the problem, and then generates a final answer, which has proven to kind of help pick through more complicated tasks, maybe do, you know, more orchestration or dynamic types of workflows than what we were seeing before. And I would say many of the models that I see being released now, at least in terms of that LLM flavor of models, are either straight up reasoning models, which means they're always going to reason in this way, they're going to generate the reasoning tokens and then output the regular tokens, or they are kind of conditionally reasoning models or hybrid models that will do that some of the time and not the rest of the time. And there's various implications of that. Certainly I think you see that driving certain of these agentic workflows. It also, to be honest, is sometimes annoying, because often, like in real business applications like we're working on, if you dial in your workflow, you really don't want those reasoning tokens, because they take so dang long. Right? You have to wait. There's so much latency introduced by waiting for these reasoning tokens that, unless you're doing this sort of very, very dynamic workflow, it's kind of annoying that a lot of these later models have these. I would say, in general, it's a good thing. 
So we're definitely in the reasoning era, and it's been cool to see these models come out, but you don't get anything for free. There's a lot of latency introduced. And because these models stream output, right, every token that is generated is an inference run of the model. Meaning if you're generating 2,000 tokens of reasoning, that's 2,000 runs of the model, operating on a computer with a GPU that is expensive, somewhere. Right?
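The arithmetic behind that point can be sketched as a back-of-envelope calculation: each generated token is one forward pass, so 2,000 reasoning tokens before the "real" answer means 2,000 extra model runs. The throughput and price figures below are assumed illustrative numbers, not figures from any specific provider.

```python
# Back-of-envelope cost of reasoning tokens: every generated token is one
# forward pass (inference run), so reasoning tokens add latency and cost
# linearly before the visible answer even starts.

def added_latency_s(reasoning_tokens: int, tokens_per_second: float) -> float:
    # Extra wall-clock seconds spent generating reasoning tokens.
    return reasoning_tokens / tokens_per_second

def added_cost_usd(reasoning_tokens: int, usd_per_million_tokens: float) -> float:
    # Extra spend on reasoning tokens billed as output tokens.
    return reasoning_tokens / 1_000_000 * usd_per_million_tokens

# Assume ~50 tokens/s decode speed and $10 per million output tokens
# (both made-up round numbers for illustration).
print(added_latency_s(2_000, 50.0))  # 40.0 extra seconds of waiting
print(added_cost_usd(2_000, 10.0))   # ~$0.02 extra per request
```

Forty seconds of added latency per request is exactly the kind of drag that makes unconditional reasoning a poor fit for a dialed-in production workflow, even when the per-request dollar cost looks small.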

22:13

Speaker C

You know, I agree with all that, but I think to some degree it's intentionally or unintentionally, and probably the former rather than the latter, being driven by the organizations hosting these models. Because, as one example that everybody would know, there's ChatGPT. You go in and you have a choice. If I'm looking at their web interface, you have a choice between kind of instant or thinking. And of course, everybody wants thinking. Do you really want instant when you can have thinking? And then if you choose thinking, it's just standard thinking or extended thinking. And so that plays to a human bias: like, you're gonna go, well, yeah, I wanted thinking, and I wanted extended thinking. And to your point, the cost of that may or may not fall to you as the consumer, but certainly the cost of producing that is much more expensive with extended thinking. Which points out another thing that's happened over the last few months that we've all heard about, and that is that for years we talked about the limitation of having enough GPUs being the limiting factor on moving forward. And now it is power, because you can take the same GPU and use it for many inferences, but each one of those separate inferences takes a certain amount of power. So, as a consumer, every prompt that I choose to make in a quote unquote reasoning fashion is going to be much more expensive in terms of power consumption. And we're hearing that in the news all the time these days.

25:09

Speaker B

Yeah, I guess that takes us to an interesting theme that we've seen develop around infrastructure, hardware, energy. It's interesting to see so much of this discussion in recent trends, as you've mentioned, and I think this will continue into 2026 and will create both friction and opportunity and interesting dynamics, which is this limitation and opportunity around power. Just a couple of things anecdotally. I went to Colorado School of Mines for my undergrad, which, as the name indicates, still has a big tie to mining and petroleum and other things. And so I have friends in the energy industry and was talking with some of them about how there are now very much speculators going around trying to purchase and get the rights to power plants that were relatively newly constructed but decommissioned while, you know, people were moving away from coal, speculating that these power plants will necessarily need to be turned back on. Right? And other anecdotes: in our town here, Lafayette, West Lafayette, there's this huge, I forget the number of billions of dollars, investment in a chip assembly plant here on the West Lafayette side. It's interesting to see all of the community back and forth to get the zoning approvals and the backlash that is happening against this chip assembly plant. And I'm not taking one side or the other of that, but what I think is interesting is you see that dynamic here, right? In China, if you want to dominate in the AI space and you need a bunch of power plants, no city is going to say, no, we're not going to have our power plant here. They're just going to put a power plant there, right? And so that's how this has filtered into this geopolitical space and environment that we're in, where power and AI and chip manufacturing and onshoring, all of this, is what's driving the political conversations now. 
And so, yeah, we've seen this trend flow from just having access to GPUs all the way to these discussions around energy, infrastructure, power, which I'm sure will just continue throughout 2026.

26:51

Speaker C

And, to delicately point at geopolitics and the implications: you know, some countries are now invading other countries and taking their oil, and, regardless of which side you're on, that was a notion that was kind of inconceivable. But power is the thing that people are talking about, because every nation has this drive for more and more power consumption to support not only its normal needs but AI growth, as does the United States, and you see a lot of interesting things happening there. So I'll just leave that one right there.

29:44

Speaker B

I think, like, to your point, this isn't a political show. We're talking about the practicalities of AI. But in thinking about the trends of 2025 into 2026, you can't go into 2026 without noting that when things happen politically across the world, AI is being mentioned as a motivation for why these things are happening. Regardless, again, of who's doing right or wrong or your stance on something, we've moved from, I think, the end of 2024 to now, the end of 2025 going into 2026, where AI is the topic that is driving some of those policy decisions. Whereas last year, if I was to kind of summarize, we were talking a lot about, well, how might governments regulate AI as a kind of piece of their policy. Now it's almost driving the key pieces of policy in a lot of ways.

30:23

Speaker C

Indeed. I guess going forward it will be interesting, as we go through '26, to see how policy continues to evolve in this, because this is a level of consumption that, you know, obviously is becoming a challenge to maintain and even to initiate, because it's not stopping where we're at. It's going on. So infrastructure, hardware, energy, those topics. It should be a volatile year in '26 to see where things go.

31:33

Speaker B

Well, Chris, there are of course many, many things that have happened in 2025, and the majority of those we've talked about so far are related to Gen AI. In terms of practicality moving into 2026, we would not be Practical AI, I think, if we didn't highlight the fact that, you know, we recorded another episode right prior to this, and I won't give away anything that's in that episode other than there was one statement that said, hey, one trend that's happening with AI models is that Gen AI models have sort of plateaued on this transformer architecture that most all of these models are based on, but predictive models still continue to advance at quite a rapid pace. What I mean by that, for those listeners again kind of parsing through this jargon, is: these generative AI models, like large language models, vision language models, etc., generate tokens or certain output like images or other things. Other models are discriminative or statistical and make predictions of classes or forecasts or those sorts of things. And the reality is that across industry these models still continue to provide amazing ROI and get better and better, and the tooling actually gets better around them. What's interesting to me, Chris, is years ago we talked about kind of this idea of AutoML, which is still a term that people use, and there's still some things out there related to that: this idea that we could maybe automate the parameterization of AI or statistical models, and that would kind of help us create these models better and faster. 
I think the reality, which is kind of interesting, is everyone is talking about Gen AI now, but there is actually this realization of maybe a better AutoML, or maybe a better way to put it is augmented analytics or augmented ML or something like that, where actually you have these highly capable tools under the hood, whether that's SQL queries, or non generative AI models, or forecasting models, or data science tools, that now can actually be tied in as tools into a generative AI model that orchestrates amongst all of them and reasons over how to use them. So, for example, I could have my e-commerce data in a SQL database, I could have a tool that uses Facebook Prophet to do time series forecasting, and then a generative AI model that can call those tools to pull the right data out of my SQL database, format it in a way, maybe with generated code that's executed, send it to my time series modeling tool, which is good at time series modeling, and then get me my forecast for 2026 for sales or something like that. So actually I think it's interesting that all the discussion is really about that orchestrator model and not about these other things, because actually it's those things that are plugged into the orchestrator that are creating the real multiplicative effect, the power of that system.
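The SQL-plus-forecasting pipeline Daniel describes can be sketched with stand-ins. This is a minimal sketch of the orchestration shape only: the "orchestrator" is a fixed sequence of calls rather than a tool-calling LLM, the database is an in-memory SQLite table with made-up numbers, and a naive moving average stands in for a real forecasting library like Prophet.

```python
import sqlite3

def load_sales(conn: sqlite3.Connection) -> list[float]:
    # Tool 1: pull monthly sales figures out of the SQL database.
    rows = conn.execute("SELECT amount FROM sales ORDER BY month").fetchall()
    return [r[0] for r in rows]

def forecast_next(history: list[float], window: int = 3) -> float:
    # Tool 2: forecast next month's sales. A moving average stands in
    # here for a real time series model such as Prophet.
    return sum(history[-window:]) / window

# Set up a toy in-memory "e-commerce database".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 100.0), (2, 110.0), (3, 120.0), (4, 130.0)])

# "Orchestrator": chain the tools the way a tool-calling LLM would,
# feeding the SQL result into the forecasting tool.
history = load_sales(conn)
print(forecast_next(history))  # (110 + 120 + 130) / 3 = 120.0
```

The multiplicative effect Daniel points to lives in the tools: swap the moving average for a real forecaster and the toy table for production data, and the orchestration layer on top stays the same shape.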

32:08

Speaker C

I totally agree. And I think that will only get amplified as you go into more of a physical AI future. We've talked a lot about that in recent episodes, especially late in this past year: as you have these orchestrators, with the tooling around them, with predictive models that are now enabled through agentic systems, there is so much capability out there that I don't think the public is really aware of. They may see drones and robots, but they haven't really, in my experience, thought through what it takes for those things to come about. And there's definitely a place for GenAI in those, in terms of the interactions you're having with the human and in terms of how the human and the physical, agent-driven platform are interacting. But back to your point about predictive: predictive models are going up and up, and I think one newsworthy event which illustrates that is the fact that one of what they refer to as the three godfathers of AI, which is of course Yann LeCun, has left Meta, formerly known as Facebook, where he was for roughly a decade, maybe a little longer, and where part of his tenure gave him a kind of academic freedom to move forward. One of the things he has talked about for quite some time is that transformers have a limited ceiling. And I know we've had those discussions lately about the limitations of GenAI, but as he, along with a lot of other people in the AI industry, looks at the notion of world models driving things forward, I think your predictive capabilities mixed with your agentic ones will really drive a lot of not only the capabilities you just talked about with the tooling, but also things in the physical AI space.
And so we may see a bit of a renaissance in those spaces going forward, as people start going, "I've had enough of GenAI. It's really awesome for what it does, but I can now finally see its limitations and its ceiling." So I'm interested in whether the upcoming year will turn attention in that direction.

35:47

Speaker B

Yeah, I would say, sometimes in these episodes at the beginning of the year we make predictions. In relation to all of what we just talked about, one of my predictions for 2026 would be that those practitioners who have the capability and knowledge to build MCP servers, to connect tools that can be orchestrated to models, and to actually architect that agentic system, regardless of model, will be in very high demand. I think that is a wildly powerful combination. As we started this conversation, we were talking about how that is part of how to get your agentic pilots and all of those things to not fail. So if there are data scientists, software developers, etc. out there listening now, take note. I'm always wrong at predicting the future, so don't trust me too much. But at least my own personal intuition is that, and I don't even know what the term is that we'll use for this in 2026, maybe it's AI engineer or whatever, but whatever that role shapes into, it will be data scientists, software developers, whoever it is, who are able to come in and actually know how to spin up a system of services, MCP servers, databases, RAG systems, and then connect those things into an orchestration layer such that they can be used. I think that is shaping into a highly valuable role and something that will survive for some time, because at least the way I see it, the things that need to be connected are so complicated across the enterprise that it's going to take a very long time for that skill of AI integration and tool development to go away in any meaningful way.
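The registration-and-dispatch pattern that MCP servers formalize can be sketched without the protocol itself. The real Model Context Protocol defines a JSON-RPC wire format and an official SDK; the plain-Python sketch below only mimics the shape of it, with hypothetical tool names (`lookup_customer`, `open_ticket`) and a dispatcher standing in for the orchestration layer.

```python
from typing import Callable

# Minimal registry-and-dispatch sketch of the pattern an MCP server
# formalizes: tools register under a name, and an orchestration layer
# executes model-chosen tool calls by looking them up in the registry.

TOOLS: dict[str, Callable] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup_customer")
def lookup_customer(customer_id: int) -> dict:
    # A real server would hit a database or RAG system here.
    return {"id": customer_id, "tier": "gold"}

@tool("open_ticket")
def open_ticket(summary: str) -> str:
    return f"ticket opened: {summary}"

def dispatch(call: dict):
    """Execute one tool call of the form an orchestrator model emits."""
    return TOOLS[call["name"]](**call["arguments"])

# Simulate the orchestrator deciding on a tool call.
result = dispatch({"name": "lookup_customer",
                   "arguments": {"customer_id": 42}})
print(result)  # → {'id': 42, 'tier': 'gold'}
```

The integration skill the prediction describes is mostly about what lives behind functions like these: wiring real databases, RAG systems, and enterprise services into such a registry, then exposing it to whichever model is orchestrating.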

38:16

Speaker C

I a hundred percent agree with that. And I think possibly the secret sauce in trying to put that together as a human, and I'm going to go back and reference the little experience I shared in the beginning, is to learn how to use the tools you have now well enough to create a workflow that allows you to leverage those tools through prompts to get all of those systems up and running. So it's not all on your shoulders as a human. You're the human at the center of a great symphony of AI agents, and you have to learn to conduct those agents in that symphony to produce way more than you could have ever done last year. I think that's a doable thing, but it's a discrete skill set, and it takes a lot of flexibility in thinking and moving out of your domain of comfort to do it. So be super willing to try very, very uncomfortable things.

40:46

Speaker B

Yeah.

41:49

Speaker C

But I think that's a fantastic path forward.

41:50

Speaker B

Yeah. And especially if you can drive those things to be even more niche or verticalized, for those out there trying to start companies and that sort of thing. If there's a particular tool set within an industry that has not yet been tied into this level of orchestration, and can be, and is necessarily complex, whether that's in manufacturing or in finance or whatever it is, and you have that domain expertise, there is definitely a window of time where not creating a single model that is able to do all of that thinking, but being able to architect those tools into a system, is going to be really, really powerful. But Chris, we're coming to the end of our discussion going into 2026. I'm wondering if you have any thoughts on what we'll see in 2026. Are we going to see quantum computing tied in with AI? What's going to happen?

41:54

Speaker C

So on that one point, I don't think we're at quantum being a highly productive thing yet, and I follow quantum a fair amount. I think that's common; people say you're always ten years out, or whatever it is, but we're still not seeing a fair amount of practical work on it. I'll tell you what I think is going to change in this coming year: as we migrate into the era of physical AI, with various types of platforms operating around us through agentic systems, with lots of models both large and small participating in those, the cost of the average person being able to get in there is dropping. It used to be prohibitively expensive, and you had organizations that would drive those efforts. But the maker world is really starting to see this as a possibility, because GPUs and ASICs, which are application-specific integrated circuits, are able to start producing AI capability at a much lower cost. And those are embeddable on smaller devices that you and your children will go to the store and buy, and you'll be able to implement things that just a year ago were unimaginable; they would have been far outside the family budget. So it's no longer a commercial-only interest, or an industrial or military-grade interest; it's now something consumers have access to. And I think that as new toys develop that are built on this and are teaching kids, that opens up an entirely new world of capability around your house, and you'll see consumer electronics reflect this in much less expensive things. Instead of just having a robot vacuum, you may have many little robots that are very task-specific coming into your life. And if you're not finding the thing at your local store or online, then you just go build it yourself with your maker kits, because that is becoming a real thing; it's becoming doable.
So my prediction is we see the very beginning of the AI maker era come about at a consumer level.

43:09

Speaker B

Cool. I'm excited for it. It makes me think of all the news about CES recently, with lots of talk about robotics there, which is interesting. My predictions are a couple-fold. One of those, which we've talked about here before and which I think is consistent with what we're seeing, is that models have become quite commoditized. The increases in performance of frontier models have plateaued, and open source models have essentially caught up. So really now we're at a stage where that moat of having the best model is not the most relevant thing. The most relevant thing is flexibility: not getting locked in, the ability to use a bunch of different models, the ability to construct a system. Tied to that point, my second thought is just how fragmented and complicated the ecosystem is getting, and I think that will carry on through 2026; we won't see the full consolidation of it in 2026. So it's no longer about "I'm going to get the best model, and now my company has AI and I'm set for the future." That's actually the easiest thing. You have a model? So what; I can get one on my phone, I can get one on my laptop. It doesn't mean anything. What is problematic is if you say, okay, I want a system to do this; now I need all these tools, and I need to connect them in a certain way. That becomes increasingly complicated. I need it to be compliant and work in a regulated industry: more complexity. I need to tie in this type of data or that type of data: more complexity. So you're just seeing this expansion of complexity in these AI systems, not because the models are not capable, but because the model is actually no longer the blocking point of the whole thing, or the single thing in the system.
And so if you look at something like NIST AI 600-1, the standard that NIST put out on how to run AI securely, I did a little bit of mapping. I tried to build up to 100% compliance with NIST AI 600-1 in Azure, and I got up to nine different services that could get me only 39% compliant in Azure Cloud. So you're already managing all of these different services, and all of that becomes complicated; it becomes a lot of labor to do. I think some of the winners in this space are going to be those that come to that complexity and tell you: hey, rather than spinning up 37 different things in Azure and hiring ten people to manage it, here's a consolidated, quick-time-to-value way for you to get X or Y, whether that be a verticalized AI solution, a secure AI solution, whatever that might be. So those are my thoughts going into the new year.
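The lock-in point made above, that flexibility across models now matters more than having the best model, has a concrete engineering shape: code against a tiny interface of your own rather than a vendor SDK. The sketch below is illustrative only; the provider classes and their canned responses are invented for the example, standing in for real local and hosted model clients.

```python
from typing import Protocol

# Sketch of avoiding model lock-in: application code depends on a small
# interface, so the underlying model provider can be swapped freely.
# LocalModel and HostedModel are placeholder stand-ins, not real SDKs.

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class LocalModel:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class HostedModel:
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Application logic only ever sees the ChatModel interface.
    return model.complete(f"Summarize: {text}")

print(summarize(LocalModel(), "Q3 report"))   # → [local] Summarize: Q3 report
print(summarize(HostedModel(), "Q3 report"))  # → [hosted] Summarize: Q3 report
```

Swapping providers then means changing one constructor call, not rewriting the system, which is exactly the kind of flexibility the commoditized-model argument points toward.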

45:37

Speaker C

Excellent, excellent guidance right there. For those who are not familiar with NIST, I just want to point out that it is a US agency, the National Institute of Standards and Technology, and they put out standards; the 600-1 document was the one Dan was referring to. So if you're outside the U.S., you can look that up; it's publicly available. Fantastic advice. Thank you for sharing that.

49:27

Speaker B

Yeah, and looking forward to talking about all those things in 2026, Chris. It's going to be a fun year for the podcast, with new things in the works. Thank you to our listeners for sticking with us another year; we very much appreciate you staying with us for so long. We also appreciate the new listeners; maybe this is your first episode. Welcome to the family. Please find us on the various socials, LinkedIn, etc. And yeah, looking forward to continuing the conversation into 2026.

49:49

Speaker C

It'll be a wild ride as always.

50:28

Speaker A

Alright, that's our show for this week. If you haven't checked out our website, head to practicalai.fm and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show; check them out at predictionguard.com. Also thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.

50:37