Deb Golden, Chief Innovation Officer at Deloitte, discusses the fundamental shift from deterministic to probabilistic systems in AI adoption, emphasizing the need for leaders to 'unlearn' traditional logic. The conversation explores the concept of becoming 'neural athletes' who can handle the cognitive load of constant synthesis between human judgment and AI capabilities, while highlighting the importance of vulnerability and empathy in leadership during this transformation.
- AI adoption requires unlearning deterministic thinking patterns and embracing probabilistic systems that learn continuously
- Modern AI work creates a new type of cognitive load requiring 'neural athlete' capabilities to synthesize between human judgment and machine logic
- Vulnerability becomes a strategic asset in AI-driven organizations as it's the one thing that cannot be easily simulated
- True AI transformation focuses on creating new business models rather than just automating existing processes
- Anti-fragility - learning from failure to become stronger - is essential for navigating AI implementation successfully
"We aren't just tired, perhaps we're becoming cognitively brittle. And again, to me, in order to be the highest effective elite and neural athlete that you can, you really have to think about not just increasing utilization, but how do you manage cognitive energy."
"Vulnerability could be your greatest asset. It's candidly right now the only thing that I can't simulate."
"If your AI strategy is based on single interactions, you're probably not building for the future. It's just a faster encyclopedia."
"The true unlock is going to be when we are creating net new business models, net new competitive advantage, net new ways of looking at the world collectively with an AI model."
"You're not just context switching on tasks, you're actually almost switching on states of reality. You're moving from creator to judge, from empathy to data analytics, and you're doing it in a very rapid pace."
Welcome to Practical AI, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm. Now onto the show.
0:00
Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
0:48
Hey, doing very well today, Daniel. How's it going?
1:03
It's going good. It seems like the weeks are all frantic in 2026, and having a lot of AI agents help me throughout each of my days to get through it definitely makes me think about my own human role in my day-to-day work. I'm excited to dig into some of those topics and others with our guest today, Deb Golden, who is Chief Innovation Officer at Deloitte. Welcome, Deb.
1:05
Thank you. Thank you so much for having me. I appreciate both of you.
1:36
Yeah, it's great to have you with us. Maybe just to start out in a general, introductory way, for those out there that maybe have heard the name Deloitte: could you give us a quick introduction, specifically how Deloitte is involved with AI work, as is the topic of this podcast, and then maybe how the Chief Innovation Officer, your current role, fits into that? That would be great.
1:39
Excellent. At the highest level, and to be super quick and brief on this topic: Deloitte is not just a service provider, but I would say a major global industrial architect, if you will, particularly of our current AI era. As I think about our multidisciplinary approach, whether that's everything from audit and tax to consulting and advisory services, you think about how all of these combined really help to rebuild the foundational pipes, if you will, of not just our own global enterprise, but any global enterprise or individual enterprise looking to make AI-native operations actually work. So, again, whether that's advising on technology or actually rebuilding the foundation, we are engaged in the soup to nuts associated with that. So it really is quite interesting to see not just how we've evolved our hundred-plus years of background into this AI era, but also how each of those disciplines has an impact on the world, both individually and cohesively across the board. When you think about audit and tax impacting R&D, as an example, that's a huge impact, particularly in our tech-forward way. It's also one of the things that has given me the ability to traverse lots of different opportunities across multiple industries. We operate in every single industry, from the commercial landscape to the government and public services sector, to different types of technology, whether that's advise and implement, operate, or products plus the commercialization and optimization of products. So it really is the gamut, which can be complicated or complex at times, but it also speaks to some of the complexity that I think we see in the world around us, and specific to AI.
And we've made quite a substantive, significant investment around AI, and it's been ongoing for many, many years now, as we think about how we can shape not just the AI world, but the other parts of the world that it impacts. Again, whether that's products, whether that's clients, whether that's our own internal operations, or how we look at addressing the client marketplace.
2:08
That was a great intro to Deloitte, and I appreciate that kind of level-set for folks that aren't familiar with it. One of the things that's really interesting to me personally is that I know you have a really unique background, and you bring a very unique perspective to the role of innovation in your organization. I'm wondering if you can tell your own story a little bit, about how you got into the position. One of the things that is curious to me is that a lot of times in large organizations, and I'm in a large organization myself, you don't tend to have the most innovative people rising to the top and putting their mark on the kinds of work they want to do. So I'm kind of curious about how you beat the odds on that, to get there and to be able to bring your own personal brand to the types of work that you love to do.
4:30
Well, thank you for all the questions today, but certainly that one, because I do think it's the foundation of how I approach things. If you were to look at my resume, you'll see decades of navigating high-stakes risk, leading massive transformations, and, candidly, managing the systems that keep the world turning, in a variety of different ways. And I say that because I've been at Deloitte now for 30-some-odd years; I did work somewhere else for two years before that. There is definitely a through line of high-stakes risk, in terms of being calculated in that capacity: looking at massive transformations, everything from digital and cloud-native, back to the ERP days, to even financial and finance transformation, while at the same time thinking about how to manage these systems that, again, turn the world. I think what you won't see, to get to the heart of your question, is the part that really matters, not just for the future that we're building, but candidly, for how I operate: how I've spent years doing what I like to call unlearning the very logic that made me successful. I spend a lot of time understanding everyone else's business to discern how best to try and help solve their problems. I love to solve incredibly complex situations and issues, and ultimately, at the end of the day, it's my inquisitive nature. I learn by asking questions, by trying to understand that the logic that helped me yesterday isn't necessarily the logic that helps me today. And I'm not afraid to actually change that logic. But in order to do that, I learn by asking the questions: what do you know, how do you know it, what are the pieces that you may know? And by looking at it without having to be in a role or have a responsibility, it could be that I'm learning the most from an anthropologist, because anthropologists know how to solve really hard problems.
What can I take away from that situation to then perhaps see the world in a very different way? I think about that whether it's an AI strategy, an innovation strategy, or cyber, and by the way, it's not just strategy; it goes to how we do implementation and execution. Those things candidly aren't found in a spreadsheet. They're really found in empathy and judgment, particularly when systems fail. And so my secret sauce, if you will, is based on my own way of thinking. But it's not because it's my way or the highway. It's not because my way is right or better than someone else's. I've just spent a life journey, candidly, of trying to understand how my brain works, and then, based on how my brain works, how I can take those things that might be intuitive to me or intuitive to others and hone those skills. A lot of my life story is based on personal triumph. I lost my mother at a fairly early age, and then I too had some very severe, life-threatening situations where I almost died. When you take those foundations, I didn't have a choice to sit there and feel sorry for myself; I had to figure out how to solve for these problems. I needed to solve for these problems. And it's not about not dealing with the emotion of the moment, but I did, candidly, have to separate some of the emotion, because I was so emotional about the situation, to actually see things in very black-and-white terms. It's the reason why, candidly, I'm very good in what I would call crisis scenarios, or solving for problems. Because what I find, at least, is that a lot of people who are trying to solve problems are afraid to make drastic change, whether it's because they built the systems that need the change, because there's an ego tied to those systems that need changing, or simply because they can't see another way forward.
And I think, just because of the life experiences that I've had, I don't look for empathy or sympathy, but at the same time, I've honed the way that my brain looks at really hard problems. And I hope, at the heart of all of that, is that no child should ever see their mother die. And if the way that I ask questions, and the way that I build solutions and help other leaders come along, enables that, then I will have been successful in my own life journey.
5:26
Yeah. Thank you so much for sharing that, Deb. It's so inspiring, and it helps us understand the framing through which you view innovation. While you were talking, I was thinking that obviously AI can impact real human lives, and is impacting real human lives, but it's also one of those things that is very tied up in people's experience and emotion, whether it's the board who really has to see AI transform their company, or the CISO who is terrified of new liability, or the engineer who is afraid of losing the core part of their job. As you've seen this kind of emotion, these almost crisis-level things, emerge around people's day-to-day work, how do you frame that particular set of emotions and change from your perspective? I think this is also kind of exacerbated by the hype around AI, right, and people not knowing what's real and what's maybe not real. For those that you're engaging with, how do you bring a level of real understanding to where folks in an organization are coming from, and what actually needs to change in the organization with respect to this adoption of AI?
9:55
Yeah, I mean, I think there's an easier way to go about it, and then there's the hard work that actually needs to be done to change the foundation. And I'm going to talk about the latter first, because I actually think that sometimes, in the race to adopt AI, we lose sight of what the true hard work is that needs to be done for actual adoption. What I mean by that is, I hear a lot of people saying, well, we've been in this AI world before, with cyber or digital or cloud. Realistically, we haven't been. Those worlds were built on very deterministic systems. So, zeros and ones: you're building systems, people, and processes based on an if-then statement, something where the outcome is fully expected. If this happens, then this will happen, and that is the expected outcome. In an AI-driven world, we don't actually have that. We have a very probabilistic system that's learning as it goes. So when you think about the work towards adoption, there is actually some hard work that needs to be done on the underlying systems. And I say systems broadly, not just physical systems; it could be logical systems, it could be people, or a process. The question is, how do you get them collectively to unlearn what they think they know? Because if you're building an AI system on top of a deterministic if-then statement, it's already bound to fail. What I talk to a lot of leaders about is this race to AI adoption where, funny enough, speed has now become the net new metric. But that's only important if you actually understand your baseline, and most people don't actually understand their baseline. So when you're asked, well, how fast are you getting to adoption? Well, did you know what that metric was before? That is question A.
And then B, when you see individuals struggling with, well, we built this quote-unquote perfect AI model in a perfect sandbox and we don't know why it doesn't work, or why people aren't adopting it. Again, it's predominantly because it was built with the same playbook and the same logic that got people to a zeros-and-ones world, which is just not the world that we live in. And I've heard people talk about this like we need to have all the guardrails possible for an AI world. I'm like, okay, well, the AI is going to outlearn you in 60 seconds, if not faster than that. So having this finite list of playbooks and guardrails and guidelines is going to be really difficult in that scenario. And that's a perfect example of unlearning what you think you know, to make this technology the best it could possibly be, to enhance the world and what we have. And it's an and, right? It's not necessarily an or. There are things you can do to keep moving things along. And I don't say that because the systems work that's necessary is going to take years to do. Maybe ask the flip question: do I even need this system or process at all? We seem to be utilizing AI to automate the things that currently exist. Operational efficiency is table stakes; it has to happen. The true unlock is going to be when we are creating net new business models, net new competitive advantage, net new ways of looking at the world collectively with an AI model, not how do I utilize AI to turn a deterministic system into being better at what it did previously. I think that's a little bit of complacency, but it's also a little bit of ease. That's what people know, and it makes people really nervous when you have to consider, well, I might have to pivot my thinking and my way of being in order to get to the highest probable answer.
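Deb's contrast between deterministic and probabilistic systems can be made concrete with a toy sketch (illustrative code only; the function and class names are hypothetical, not anything discussed on the show). An if-then rule always returns the same answer for the same input, while a probabilistic system's "answer" is a distribution that drifts as it keeps learning, which is why a fixed playbook of guardrails can go stale:

```python
# Deterministic: an if-then rule maps the same input to the same output,
# forever -- the behavior is fully specified up front.
def deterministic_route(ticket: str) -> str:
    if "refund" in ticket:
        return "billing"
    return "general"

# Probabilistic: the answer is a distribution over outcomes that keeps
# shifting as the system learns from new observations (toy counts here,
# not a real ML model).
class ProbabilisticRouter:
    def __init__(self):
        # Start with one pseudo-count per label so no probability is zero.
        self.counts = {"billing": 1, "general": 1}

    def learn(self, label: str) -> None:
        self.counts[label] += 1

    def probabilities(self) -> dict:
        total = sum(self.counts.values())
        return {label: n / total for label, n in self.counts.items()}

# The rule's output never changes for the same input.
assert deterministic_route("refund please") == "billing"

# The router's output drifts as it observes more data.
router = ProbabilisticRouter()
print(router.probabilities())  # {'billing': 0.5, 'general': 0.5}
for _ in range(8):
    router.learn("billing")
print(router.probabilities())  # {'billing': 0.9, 'general': 0.1}
```

Testing a system like the first one means checking a finite table of expected outcomes; testing the second means reasoning about distributions that yesterday's assertions may no longer describe, which is the "unlearning" Deb is pointing at.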
11:31
So your team is deploying models that generate code and write entire documents, and you're automating workflows that used to take weeks into days. But updating a headline on your marketing website? That's a three-day ticket somewhere in the backlog. There's something deeply ironic about an AI company, which is pretty much every company now, that can ship a model to production in hours but can't push a landing page without a deploy cycle. And yet that's most companies right now. Framer fixes this, and it's not even close. It's a website builder that works like your team's favorite design tool: real-time collaboration, a CMS built for SEO, integrated A/B testing. Your designers and your marketers own the entire .com from day one. Changes go live in seconds. You get one-click publish, no engineer required. Your team reduces dependencies and hits escape velocity. And before you think "no-code, cute": Perplexity, Miro, Mixpanel, all of them, all their marketing teams are running on Framer. Enterprise-grade security, premium hosting, a 99.99% uptime SLA. The infrastructure is serious. The workflow just removes the bottleneck. Learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/PracticalAI for 30% off a Framer Pro annual plan. That's framer.com/PracticalAI for thirty percent off. framer.com/PracticalAI. Rules and restrictions may apply.
15:32
So, Deb, I wanted to follow up on that last one; you really got me thinking with that answer. And I also want to bring in one of the things that I know you are a very strong believer in: empathy, and bringing that human element. We're at this moment you were describing, where leaders are struggling. There's a new way of thinking needed to get them moving forward from where they've been. As you mentioned earlier in this discussion, the logic of today and yesterday may not be the logic you need for tomorrow. And built into this, there is this fear within leaders in various organizations, where they're worried about all the potential bad outcomes and the guardrails that they'd like to put around those things. That really harkens back to human fear, human vulnerability. How do you see the role of acknowledging that vulnerability that people have in this capacity, and the empathy that might be applied there, to make really hard decisions, to go a different path from what they have been led to believe, what they've learned through their career, and what they've always done up until this moment? How would you guide them through that process? How would you suggest they take that next step?
17:03
Yeah, and if I may, maybe I'll provide a little bit of sentiment in advance of answering the question, which is: sure, on paper we appear more connected than ever, if you think about all the ways that we can connect. But on a human level, people feel invisible. And I think this is paramount. We had a period where we were all at the office. Then you had a period where no one was at the office. And now we have a hybrid return to work, however you define that, because I don't want to get hung up on what that definition means or doesn't mean. But I'll use this analogy as an example: we have some simulation of hybrid in an office, and yet in most situations we've never changed the semantics of what an office means. When you think about that, even if you're telling people to go do these things, it's really easy to very quickly say, not only do I feel invisible, I'm not sure how to communicate in a world where the physical structures haven't changed to meet the demand of where we are today. Put these things together, and then couple that with individuals who feel like they've been replaced by efficiency. The way that I might shift the question is this: even the logic of yesterday would tell us that vulnerability is a liability. And I would actually argue that point completely, because there is something hidden in being a true, polished, I wouldn't even say executive persona, I would just say a human, right? In this world, where we have what I'll call the authority economy, vulnerability could be your greatest asset. It's candidly right now the only thing that I can't simulate. That's not to say that in the world of tomorrow it won't be. But if AI never felt the weight of a life-altering decision, or the grief of a lost foundation, what might be different?
And so that's where maybe I shift my thinking a little bit as it relates to vulnerability: vulnerability is not a bad thing. Vulnerability is a very good thing. But it does take very purposeful energy for individuals to put that vulnerability out there. And that, by the way, is also probably hard, because look, we all judge ourselves; we're probably our own worst critics. Should I do this? How will people interpret it? What if I say something that's not appropriate? All of that goes along with it. But I do think, if you consider where we're headed, vulnerability, and leading through vulnerability, is incredibly important. And at the end of the day, that is how real empathy starts. Real empathy starts with saying, I don't have all the answers, whether that be today or tomorrow, but I'm committed to finding out and understanding them through and with you, and creating not just psychological safety but something more. I think something that's broken in today's day and age is that goals, roles, and metrics don't necessarily support that. If human behavior inside of an organization is driven by goals, roles, and objectives, and those don't tie up to allowing us to be truly vulnerable and empathetic leaders, it's going to be really hard to see people do it more and more. And truly, if you do it in a way that is accretive, empathy ends up being not just a quote-unquote nice-to-have soft skill. It actually becomes a high-level diagnostic tool, right? If you really, truly listen to the struggles, and aren't just being nice, you start to uncover the system paradoxes that are actually slowing you down. And I think that's where we see people struggling: the legacy systems we've forced them into are at war with the tools that we've given them. I do think empathy is going to be a way for us to see that friction, something that the dashboards or the lollipop charts often truly miss. Again, I don't think we're ever short of ideas.
I think we're short of really understanding how people view things and view the world. And candidly, I've had that across my career, right? I may come up with lots of ideas, or lots of ways to say, I think we should look at things in a very different way. And it's really been hard, because people will say, well, Deb, you're just so different. And the connotation of difference over my life has really led me to hear it as a negative. My difference doesn't make me negatively different. My difference makes me who I am today, and shapes the way that I see the world. I needed to do the work to understand how that difference could make me more successful, and I mean that personally, not necessarily professionally. If I lean into understanding some of those things, I do need to hone them. I get from A to Z quicker, maybe, than most. That doesn't make it better or worse, but it does mean I may need to pivot how I have people learn and understand what I'm saying, because I need to make sure that path is really clear. So I do think understanding judgment, understanding empathy, understanding vulnerability is going to be a way that we can see through, if you will, the sameness. And in an AI world, look, AI democratizes innovation. It also democratizes very different cognitive thinking, which I think is going to be critical to us being able to solve the most complicated problems.
18:37
Something, Deb, that I was thinking about: I was trying to recall, in my own experience over the preceding months, where I've had this sort of cognitive load, or maybe fear, around some of the paradigms shifting under my own feet. I think one of those things has been how I engage with our engineering team, and how I engage hands-on. For a very long time, at least as far as my engagement in technical things like coding or infrastructure setup, that sort of thing always made sense to me, and now I'm wrestling with these agents and connected systems aiding me day to day. That's been great; as I mentioned to Chris early in the conversation, I'm able to get a lot done. But I've noticed a different sort of cognitive load on myself. I get some things up and running, right? Like, oh, I'm changing this design over here with this agent to help my engineering team visualize what I'm trying to say. Then I context switch over to email, and I want this document summarized over here. Then I context switch back to another email. There's a lot of cognitive load that I'm experiencing now that I wasn't before, as I go back and see, oh, my agent over here kind of went off the rails, and I have to remind myself what to do and redo this. So the overall flow of my work has shifted, and in a lot of ways it's more productive, but I've noticed a different sort of strain on myself. I'm wondering if you're also seeing shifts in people, as you engage with them from an empathetic standpoint, experiencing this new technology: any different kinds of cognitive load that people are experiencing, or ways that this technology is affecting them in a profound way?
24:14
Personally, for sure. If you're utilizing AI in this capacity, or any capacity, by the way, learning, etc., there is going to be a different cognitive load. And it's not just in the way that you described it, because I agree, by the way, particularly if that's not something you learned as a way to work through problems. It's problem solving in a very different way. And again, it's not good or bad; it just is, and all of our brains think and learn differently. I do think that's something that's really important, because the way you learn may not be the way that I learn, and may not be the way that everybody else learns. That, to me, is probably one of the most empathetic and most vulnerable things you can acknowledge, because I think there's a part of this which would say education has taught us that we should all think and be one way. And I just fundamentally don't agree with that. I mean, we have five senses; some could argue we have six now, but we have multiple senses. You learn from all of your senses. Standard-day education teaches you to learn with two senses, so you have all these other senses that you learn with, and people do learn with. And I think that actually is probably one of the most vulnerable and empathetic things you could say or do to someone: the way that you learn and understand and evolve is not the same for everyone. Even just understanding that nuance is super important, because the old paradigm, if you think about the strain, was checking boxes, moving files, following a manual. In the new paradigm, AI does its thing, but that doesn't mean there's not a perspective for human thinking or human intersectionality, particularly when we talk about things like hallucinations with the AI, or AI bias; there's a whole host of new things now that we have to go learn.
So it's not just about how do I build the algorithm; it's all these other things we also have to be thinking about. And by the way, even if we're not thinking about it, AI is inherently learning in that context. So I think it's like context switching on steroids, because it's not just the thing that we used to address, it's all these new things that we have to add in. You're not just context switching on tasks, you're actually almost switching on states of reality. You're moving from creator to judge, from empathy to data analytics, and you're doing it in a very rapid pace. We used to talk in terms of hard work being about hours and output. Today, hard work is cognitive synthesis. So when you think about how you engage with AI, you aren't just typing, right? Before, we were typing and we'd see it on the screen, or you'd see the squiggly come up when something was misspelled, and you had time to go back and change it. Now we're constantly adjudicating between what the model says, what you know to be true, what the organization needs, and what you think might be false. It's a lot, right, just to put into the scenario. And to me, what that means is a constant interrogation of the truth. And that becomes heavy. It becomes heavy. You get exhausted even just after an hour of prompting, because you've done a day's worth of executive judgment in 60 minutes. I do think this load is real. Again, I'll apply it back to a statement I made earlier: it's not just the load, it's switching between human logic, i.e., nuance, EQ, history, and probabilistic logic: what are the likelihoods, patterns, and averages? I've termed this idea that everybody is now becoming a neural athlete. It's like running a sprint on a treadmill that keeps changing speed and incline without warning. We aren't designed to live in that state forever. We're not designed to be in perpetual high-velocity synthesis.
And that also reinforces the need for a pause. We aren't just tired, perhaps we're becoming cognitively brittle. And again, to me, in order to be the highest effective elite and neural athlete that you can, you really have to think about not just increasing utilization, but how do you manage cognitive energy. And that's not just for you; it's for you and for your team. Sometimes that means stopping is more important. Sometimes that means pausing to question is more important. And sometimes that means the work that you did in 60 minutes, you're okay to throw away, whereas in the past, throwing away work after 60 minutes, I don't know about you, but I think we pretty much all cringed, like, oh my gosh, my whole life's work was in that 60 minutes and now it's gone. Again, part of this gets back to my comments on unlearning, right? I don't think the luxury is speed, even though the world is moving very fast. It is focus, and how we can actually think about becoming that neural athlete who has the clarity to know which problems are worth solving and the empathy to bring along the right people for the ride.
26:21
So, Deb, I think you really hit a point there that is a big deal from my perspective, and something I've spent a lot of time thinking about; it's something that Dan and I have talked a good bit about, both on the show and off. I love your term about being a neural athlete, and the fact that it's maybe something of an obstacle course: everything is changing every moment or two, and there's the speed and the exertion that you have to put in. Not only did Daniel describe cognitive load, but you pointed out the notion of the synthesis that goes with that, and the context switching, and if you put all these things together, that's asking a lot from the humans at this point, in terms of trying to navigate our rapidly changing world. You can extrapolate out to what we see in the world every day in the news. The world is changing really, really fast, and in some ways that's better; in some ways it's quite hard. There are a lot of people out there who may find it particularly challenging to navigate this. So the question is: not everybody is dealing with innovation on a day-to-day basis and new ways of thinking with AI, and this little discussion group is quite unusual compared to the rest of the people out there who are just living their lives in a more traditional way, in whatever culture and part of the world they're in. How should they be thinking about this when it's not part of their normal day-to-day thinking? How do they navigate safely and productively, maybe productively is the wrong word, safely into a new world without stumbling, which is the great fear? And I think we're seeing that in politics and in just about everything these days. Do you have any thoughts about how to shape a larger world so that people can catch up? What are your thoughts?
I know everyone has their own thing, but I'd love to hear yours.
31:37
Yeah, no, and look, the world is going to evolve whether we want it to or not. That is just what is going to happen. And so I look at it and, well, yes, there are days I'm trying to figure out how we solve really complicated health care issues, which I realize everybody might not be trying to solve for. But I also look at it and say, even in my own life, in the day to day, how do I just lean into it? Because maybe the outcomes that I'm looking for will help inform my decisions. They don't have to be better or worse. I just love data; I'm a nerd, I love data, I love to utilize it. And I think this is also, candidly, a little bit of a stumbling point, because some people look at this and say, I don't know where to start, even in my day to day. And I'm like, okay, think about this. We all face the same constant question every day: what do we cook for dinner? Why does this seem to be such a challenging question some days? I will use AI in that example. I will video or take photos of my pantry and my refrigerator and my kitchen, and then I will say, make a recipe. And look, I can get more sophisticated, which I do. I don't just say, make a recipe. I say, make a recipe that works with my blood type, that works with certain nuances that affect me and my own health. Or, hey, I've got to cook for four different people who have four different appetites, and I have no idea how to do that. Give me a recipe that can still use these ingredients, but that I could slightly modify so every one of these people gets what they need out of dinner tonight. As opposed to what would normally happen, which is that I would stress insanely about what to cook for dinner. And I enjoy cooking, by the way. I love it. I love to cook.
But it takes that little bit of stress out for me. Very quickly, inside of two minutes, I can do an assessment of my house and what I have, and I can get multiple recipes to look at, and there's really no downside. So what if I mess it up? So what if I didn't do the right prompting or didn't do the right thing with it? So what if AI misinterprets my questions? I learned, by the way, that if I misspell dinner and put diner, or if I misspell spaghetti, I get two very different answers. So I actually start to learn the day to day of AI in a way that does not seem overly intrusive or overly complicated. But candidly, I am learning things. I'm learning about bias. I'm learning about how a misspelling can impact an outcome. I'm learning that if I give it more direction, it gives me better analysis back. I learn that if I can give it more of what I need it to do for me, it will take away the friction, so I can go do the thing that I need to do. And when I think about how to take, quote unquote, your daily life, that's just how I've done it. For example, we're redoing a design in part of my house. I'm pretty creative in general, but art is not my thing; maybe in certain circumstances, but not in my day to day. So I could for sure go hire lots of experts, and by the way, I have, but I also want to give them some insight. How do you give them insight? Again, take a photo: I'd love a modern, organic view of what this looks like. Can you help me? And I'm a very visual learner, so the fact that my model can produce visuals back out to me maybe takes away the trepidation I have when I meet with some of these contractors or vendors and have no idea what they're talking about. Now maybe I have a better way to approach that conversation, because I've educated myself utilizing AI in advance of it.
Now, there are some dangers to that, because we know AI hallucinates, we know AI can tell us what we want to hear, and we know there's other bias that's just inherently built into some of the ways we ask questions. The nicer you are to your AI, the nicer it is to you. I do think you have to understand that. And again, you can use a fairly benign daily activity to go learn it, and I would just encourage people to do that, because that's what's going to help you become more comfortable with AI, learn AI, and understand how to dip your toe into it. You don't have to be, and shouldn't have to be, in an innovation role to understand how to engage with the things happening in every single part of your life that are being automated by AI, either today or in the future.
33:56
I love that building up of intuition you're talking about, Deb. Chris knows I like to ask selfish questions of our guests, because I learn a lot personally from these conversations. So my selfish question is this: I've seen a lot, whether it's on engineering teams that we're working with as a company or with developers I'm working with at workshops, and you mentioned that Deloitte works kind of soup to nuts. We just talked about everyday life working with these models, but I think some of that intuition is shifting for developers, because the intuition they bring comes from experimenting with models in terms of single back-and-forth interactions with a single model. Now they're forced to think about how to apply AI in software integration and technical circumstances, and what I often see is them trying to pack everything possible into a single thing that the AI could do. I've seen a lot more of a shift toward the things that actually succeed in implementations being thought of as systems, and I think you used the word system quite a bit in your discussion. So now AI doesn't really mean a single interaction back and forth with the model, but a set of things, some of which are done by AI and some of which interact with other tools and connections, to a database or an MCP server. It's really a distributed system now. And I'm wondering, just selfishly, if that shift, from thinking about an interaction with a model to thinking about a distributed system with AI, is something that you've run across, or a shift that you're even seeing.
38:48
I mean, for sure. The one thing I would add to the distributed system analogy that you provided: it's not the distributed ecosystem that perhaps you once knew. Even if it's distributed, pick your favorite competitor; your favorite competitor can now be your favorite foe or friend, right? The ecosystem itself has also shifted, because we've shifted all the pieces from maybe a single approach to a multi-model hive. When you think about that, the legacy intuition was to create the best model in one single, centralized approach that does everything. But that creates a massive systems paradox, particularly when you pack everything into one model: you inherit all of its biases and limitations, and any single point of failure. And again, given where AI is taking us, it's funny to me, because people still actually think that AI is a search bar: I ask it a question, it gives an answer, and that answer is 100% correct. I think that's a 2010s way of thinking, because we need to have continuous orchestration. In your question, we were continuously orchestrating, maybe on one single thread. Now this is about how you actually create continuous orchestration in a multi-layered agentic approach, where maybe it's actually running in the background. It's not really about the prompt; it's about the connection between the models. If your AI strategy is based on single interactions, Daniel, you're probably not building for the future. It's just a faster encyclopedia. As you think about how you're architecting these systems, I do think there's another skill. We talk a lot about resilience; I talk a lot more about antifragility, because I think that's how I've always learned in my life, but it's also, I think, how we need to learn in an AI world. What I mean by that is antifragility is allowing yourself to not just fail and learn. We love to say we're a failure culture.
It's actually the second part that's most important: learning from the failure truly, to adjust your way of thinking and become stronger. And in order to do that, if you're not actually pushing yourself even further, you're truly not failing. We tend to live in a world where we've built a failure mentality based on the things we know. I don't know about the both of you, but to me, that's not actually pushing ourselves into a failure mentality. A failure mentality is actually saying, look, I expect that 20% of what I do is going to fail. And the antifragility of that is to say, I am going to become a better person when I fail from it. That is candidly how I live my life now. Honestly, that's a little scary sometimes, because I could be failing while other people around me could be seen as, quote unquote, more successful, just because they don't have a failure metric in their brains. I think that's the only way that I, personally and also professionally, will actually be able to remove even my inherent biases. Everybody has a bias. We're built with biases, whether we know them, whether they're cognitive, whether they're sitting somewhere back in the deepest part of our brains. We all have cognitive bias. And the only way we can even start to pivot that one iota is to allow ourselves that moment of failure and to actually say that we expect it to happen. If we expect it to happen, then we can become something more, built around antifragility. And that, to me, gets back to your cognitive load across multiple models and agents. We're creating a system that actually handles the hurricane of modern data, modern processing, modern load. It actually allows for this intelligent disobedience.
I'm not being disobedient, quote unquote, for the sake of being disruptive; I'm being disobedient because I hope we force a different question, a different change, a different semantic than what we've groomed around the status quo. And so if one model is hallucinating, the others can flag it. It's sort of a new way of looking at checks and balances that honestly has been missing in our legacy structures, whether personally, professionally, or inside a corporate organization.
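The "checks and balances" idea Deb describes, where one model can flag another's hallucination, can be sketched as a simple cross-check: query several independent models and treat disagreement with the majority as a signal for review. This is an illustrative toy, not anything Deloitte ships; the model callables and majority-vote rule here are assumptions standing in for real model endpoints:

```python
from collections import Counter

def ask_models(question, models):
    """Query several independent models and collect their answers by name."""
    return {name: fn(question) for name, fn in models.items()}

def cross_check(answers):
    """Return the majority answer, plus the names of models that disagree.

    Disagreement doesn't prove a hallucination, but it flags where a
    human (or another agent) should take a closer look.
    """
    majority, _count = Counter(answers.values()).most_common(1)[0]
    flagged = [name for name, ans in answers.items() if ans != majority]
    return majority, flagged

# Stub "models" standing in for real, independently trained endpoints.
models = {
    "model_a": lambda q: "Paris",
    "model_b": lambda q: "Paris",
    "model_c": lambda q: "Lyon",  # the outlier: gets flagged for review
}

answers = ask_models("What is the capital of France?", models)
consensus, flagged = cross_check(answers)
print(consensus)  # Paris
print(flagged)    # ['model_c']
```

In a real multi-model pipeline the comparison would be fuzzier than exact string equality (semantic similarity, a judge model, confidence scores), but the structural point stands: no single model is a single point of failure when its peers can dissent.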
40:52
No, that's a great insight there; I appreciate it, and the embracing of failure in order to learn and move beyond it very quickly, the way you put that. As we look at closing out here: you've done a great job of bringing us into a new way of thinking about AI and the world in which we live, how it's impacting people, and the vulnerability they carry as they navigate through this. Looking ahead at the upcoming few years and the rapid change we're experiencing, people are having to adjust and maneuver through a new type of cognitive load and feeling that whiplash. Can you talk a little bit about where you see things going over the next few years? What would you expect next, both from the technology and from the people involved in it?
45:18
Yeah. And if I may, before I answer the question, let me give you an anecdote about how I'm going to come up with the answer. If you were ever to look at my profile on LinkedIn or somewhere else, you'll see that one of my favorite side hustles is training service dogs, and I've been doing it now for many, many years. I'm what's called a quote unquote puppy raiser, which means I get a dog from age 8 weeks old to 16 months or so, and they learn quite a substantive number of commands with me by the time they leave me. Through the foundation I work with, they're ultimately matched with a veteran or first responder, free of charge for the rest of their life, so that individual can lead a more independent life. And when I think about the things I have learned in the world, I've learned them not just from training a service dog, but from the things these people have done to fight for our freedom, and from the impact that has on them and their lives. And I correlate that to leadership. There's nothing like thinking on your feet when you've got an eight-week-old puppy in your hand and you're in front of a boardroom and it's got projectile everything coming out of every way you could possibly think of. I can't stop the board meeting; I can't stop what I'm there to do. What is it that I need to understand in that moment to be able to handle that? But even more so, try forcing a dog to learn over the course of an hour when it's done learning after five minutes. It's like, hey, I can't do it, I'm a nine-week-old puppy. You learn how to adjust the training style based on the thing that you're looking at. And by the way, we do all of our training with positive food reinforcement, all positive reinforcement. We don't have...
46:16
Fantastic.
48:04
Yeah, we don't have the command "no," as an example, in any of our commands. We actually use eye contact and name recognition, or other voice recognition, as a way to distract and get the focus back on us. Because at the end of the day, a lot of these dogs, whether they're traversing a busy street or helping someone who's blind and deaf and relies solely on this dog to get across a busy street, actually have to be trained for the edge scenario. Yes, of course I have to make sure the dog can walk the person across the street, but I have to train for the edge scenario. So when something goes wrong, because it inevitably will, how can that person have the utmost confidence that the dog is going to be able to guide them to safety? And that's what I think about the future. When I think about what the future is to bring, it's thinking about the world in that capacity. It really is about the edge solutions, the edge challenges, the edge capabilities, built in a world where we have ethical trust. The same thing is happening here. The trust and the bond between the dog and its handler is hugely important. Even if I taught it every single command, quote unquote, perfectly, if that trust isn't built, it doesn't matter. And when I think about the way I've evolved personally over the last 15-plus years of doing this, so much of what I've learned translates from service dog training into leadership. Honestly, I've learned more from that service training than from any leadership course. But also, the empathy of understanding veterans and first responders, or anyone suffering or struggling, and knowing what it means to have that empathy, matters tremendously. Because how they may get in a ride share, or how they may get on a plane, train, or automobile, is very different. And so understanding that, and then designing for that, is what we need to think about for the future.
It's not about designing for ourselves; it's actually about designing for others. And look, I know a lot of people can look at that and say, that sounds like motherhood and apple pie, Deb. It sounds nice, but it really is core. When you think about some of the greatest technology we've ever built, years past or years forward, it's because it was solving for something that didn't exist, for a different set of purposes. And that, to me, is really how we're going to look to the future. Again, if we're real about all the things we've talked about, the emotional toll is real. Once the busy work is gone, we're left with the hard work: that is judgment. And at the end of the day, that is how we can get to manifestation learning. That's a different rhythm. It's not a failure. It is, in my mind, power, and a different kind of power. As we think about the foundation for tomorrow, that's going to be how we solve the really, really hard, truly complex problems. And it doesn't mean that you have to be the Chief Innovation Officer. It doesn't mean that you have to understand tech. It means that you're willing to be vulnerable and try things that perhaps you didn't think you knew, or things that you actually want to go change about the way you view the world.
48:05
Well, I think that's an amazing perspective to bring as we close out here, Deb. It's really encouraging for me personally as we head into this year. I just want to thank you for taking time out of your work to join us and talk through these things. I think our listeners will really appreciate it. So thank you so much for joining us, Deb. We hope to talk again.
51:19
Thank you so much for having me. It's been a true pleasure.
51:40
Alright, that's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show; check them out at predictionguard.com. Also thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
51:49