Welcome, everyone, to the AI in Business Podcast. I'm Matthew DeMello, Editorial Director here at Emerge AI Research. Today's guest is Baker Johnson, Chief Business Officer at UJET. UJET is a next-generation cloud contact center platform that leverages AI to modernize the customer experience. Baker joins us on today's show to clarify why enterprises have invested so heavily in conversational and agentic AI without achieving the expected CX gains, and how decades of treating customer experience as a cost center have led to siloed processes, misaligned metrics, and poor returns. Our conversation also covers practical workflow changes that drive real ROI: redesigning processes rather than automating broken ones, shifting from deflection to outcome-focused metrics, reconciling systems of record with real-time interaction data, and establishing a healthier balance between human and AI agents. Today's episode is sponsored by UJET. But first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business Podcast wants to hear from you. Each year, Emerge AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the program. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit Emerge.com and fill out our Thought Leader submission form. That's Emerge.com, and click on Be an Expert. 
You can also click the link in the description of today's show on your preferred podcast platform. That's Emerge.com slash expert1. Again, that's Emerge.com slash expert1. Without further ado, here's our conversation with Baker. Baker, welcome to the program. It's a great pleasure having you. Hey, Matthew. Excited to be here. Thanks. Absolutely. One of the things I love about talking to CX leaders, across industries, very much so in financial services but very much beyond, is that CX is the canary in the coal mine for so many AI adoption challenges. We've had a lot of folks come on the show and talk about how their organizations entered the AI race through CX, expecting automation to slash costs and reduce reliance on human agents. There's been massive investment, especially in these areas, but call centers are still struggling with complexity, high operational overhead, and customer expectations that are just not abating anytime soon. There are these things called agents coming across the horizon. In many ways, they're already here, but human agents still remain critical to the process in this transition, alongside the new digital agents. As a leader pioneering agentic and conversational AI development, I'm wondering: where do you see the real challenges in customer experience, if not in the agents themselves? It's a loaded question. And I mean, I really think you kind of have to peel back the onion. So let me just sort of break it down. And it's a little bit of a history lesson. But I mean, I think fundamentally this is a systemic issue, right? 
And the issue starts with what we call customer experience, right? You know, I feel sometimes like I'm living in a Black Mirror episode here, because I come up through sort of a sales and marketing background, and when I think about customer experience, it's literally every single point of interaction with a brand across sales, marketing, and support. Whether that's in a retail setting, it's from the first ad impression all the way through the point of sale, support, loyalty, renewal, referrals, and so on. And that applies in healthcare, that applies in financial services, but for any business that serves customers, that entire lifecycle is the customer experience. But what we've done in the contact center, or in the customer support industry, is we've taken sort of this assembly line mentality, this waypoint, and we've tried to claim that that is the customer experience point of impact, right? And so we have all this lip service about being customer-centric, but we're really back in the same old departmental, siloed, inside-out thinking approach to the customer and to the business outcomes that we're using to underpin all of our strategy. So that becomes the next big issue, right? We've relegated CX to be a support term, and that means we've dissociated it from revenue, loyalty, business and brand building. So we're now like 30 years into operationalizing customer experience as a cost center. And so everything is about ruthless efficiency as the measure of success, right? And so from that thinking, we minimize the investment, we stitch together point solutions and dated legacy technology. And then even when we have the opportunity to introduce a transformational technology like AI, we literally just apply it to automate that legacy process and thinking. And we measure success by how well we can deflect the customer. 
And then the poor human agent is left kind of holding the bag, and we blame them for their inability to be the human integration bridge for all of this. Fundamentally, it is a mindset challenge. The technology is there. The process needs to be reimagined. And agents and the technology need to work hand in hand. Now, this runs ever so slightly contrary to the LinkedIn post you made a little while ago. Your team put it on my radar, and it was very prescient for today's conversation. What I'm getting out of your last answer, versus the LinkedIn post, which I'll describe in a moment, is that what we've seen in CX with all this data and technology started with all this promise: oh, we're going to get closer to the customer, it's going to change their experience. What really ended up happening is CX ends up in a silo, other things end up in a silo, and then CX is just about getting you through the DMV faster, more efficiently, everything else. This runs a little bit contrary to your LinkedIn post, which we can set aside and I can ask about later. But your LinkedIn post was saying, you know, maybe on the other, more promising sales side of CX, when data and AI were first coming in, the pitch was: oh, the customer experience of calling your bank is going to be like going to Disneyland. You don't need it to be like Disneyland. The DMV is not Disneyland. You do need that speed and the approach of knowing, no, this isn't supposed to be a vacation for the customer. They just want it to be painless. Or that was kind of what I was reading between the lines of that LinkedIn post. Am I getting that last answer correct? Your LinkedIn post, am I getting the right read there? You got them both right, but I'll make a distinction for you, right? 
And I think the opening line on that was: stop trying to delight your customers and just give them their time. Yeah, yeah, delight is not the point, right? The whole challenge with this efficiency mindset is: don't speed up a bad experience, right? The thesis behind that post was the idea that nobody wakes up wanting to have some delightful experience. And actually, because of all these challenges and the call center mentality and the limited investment in the support experience, we have a bunch of companies in the space, candidly, who've put marketing around delivering experiences worth talking about and delightful experiences. Nobody wants that. Like, just take my money and give me what I want, right? The best interaction is the one that never happens. So the idea of that post is, you know, we were announcing an acquisition of a really interesting conversational analytics company, Spiral, that allows us to analyze and understand the conversation at scale that's happening in the contact center. And it's less about deflection and AI as an intermediary, and more about actually designing an organizational blueprint around customer concerns. Absolutely. Absolutely. And I'll always say, you know, when I'm at the proverbial high school reunion (we're getting towards the end of the year, and maybe folks at home my age can relate to this), they find out what I do, and then they're like, oh, have you seen, like, you know, the whatever video that was definitely made with AI of the airplane and everything else? And you can read into the specific post. I mean, I'm like, no, I really don't look into that stuff. I look into fraud detection, drug discovery, CX. And I'll say, you know where AI is really valuable outside of any hype? When was the last time you called your bank? Did it feel like the DMV? No, but none of them are talking about it like it's going to Disneyland. Like, oh, it was the best part of my day. 
That's what I looked forward to, calling my bank. No, they're never going to look forward to calling the bank. But the point is, now it's painless. Just to separate what you meant by delightful experiences versus really frictionless, which I think is emerging as the realistic goal there. But turning to enterprises more broadly, why have so many invested heavily in conversational AI without seeing the expected returns? Where's that disconnect happening over expectations? Yeah, look, I think you and I were talking before the show about some of the terminology, right? There were these sort of first-gen chatbots that turned us all off, where they were really these glorified search engines into knowledge bases. And then they'd ask you, was that helpful? And then we went to sort of conversational. Now we're in the generative or agentic era, and it's hard to make a distinction between those things. So, I mean, I think the first reason, candidly, is that it's early innings in this next generation of generative AI, and everyone's going through various phases of trial and error. And I do think that is the path to getting it right here: the enterprises that are the early adopters, that are deploying, learning, containing mistakes, and building on it, are getting ahead. But the bigger issue is actually that we're trying to retrofit AI into an existing set of processes, right? It's always people, process, technology. And the phrase I've been using I actually found in a Wall Street Journal article from '87 having to do with industrial manufacturing. And it was about not paving the cow paths, right? It's like, you know, you're taking some poorly designed, poorly thought-out process, and you're making it 10 times more efficient. Right. And so, you know, that is not transformation. That's accelerating the problems that you have today. 
And so, you know, you couple the concept of trying to automate legacy process with measuring success based on, again, efficiency: how well we deflect a customer. We're using AI as a gatekeeper between customers and human agents. If you view human agents as a cost and you think that AI agents or virtual agents are a cost savings, the blunt-force approach is literally just to apply AI to make it as hard as possible for customers to get to our human agents. And what you're left with is human agents dealing, by the way, with the most complex issues and the most complex customers, and they're set up for failure, right? By the time that customer actually reaches the agent, they're kind of livid, right? And you touched on the last piece of the puzzle there, which is sort of this delight fallacy. You know, we're taking this customer journey design approach to how we think about AI and modeling journeys for customers. But like, I'm driving, I've got my three boys in the car, there's devices playing and everything else, and I just need to transfer money, or I just need to get the return approved. And I don't want a delightful journey. I want the outcome, right? It isn't a journey. It's a destination, contrary to so many other parts of life we find meaningful. And also the AI adoption process, which is a journey, not necessarily a destination. Another fallacy in there that you were touching on in your last answer, which I think the audience is noticing, and I'm noticing more and more, but I'd like to bring it up between industries, is the zero-sum fallacy around human labor and the exchange to the AI-driven, transformed organization. It's not zero-sum, as if you just hire or deploy this many AI agents and they will do the work of X human beings. It never, ever ends up that way. 
And that complicates the conversation both around what we're seeing on the political-economic side for labor, which everybody can listen to another podcast about, and on the management side, where you need to manage up to explain: no, the transformation is not one-for-one. And if you're looking at it only, like you said, as human agents as a cost, then you're already off on the wrong foot with your adoption. Getting to the core of the problem in this space, and I know we were touching on this a little bit before the microphones turned on, is the murky language around chatbots versus conversational AI, which is even more precarious now that we have digital agents and agentic technology coming down the pike. And speaking very informally, colloquially, just how people come on the show, whether they're in the CX space or outside, I've found that chatbot referred to the glorified search engine, as you described it, pre-pandemic, very deterministic. I could string a couple words together. It could maybe answer really, really dumb questions, but it's mostly just there to make you feel like you're heard by the worst thing to talk to, a robot. And then conversational AI was: oh, we have generative capacities. This is a little bit after the OpenAI ChatGPT explosion. This thing has much better predictive capabilities when it comes to language. So good, it does feel like it understands and talks to you. But that's not what's going on. It is predicting what you want to hear next, which is actually a lot of human behavior in customer experience, which is why it gets murky. If we can maybe dive into how and why that was so problematic, why those terms are so murky, and what the difference is between these technologies and agentic, what are you finding are the biggest misconceptions there? 
Well, funny enough, I think what you talked about, the trend with your audience in terms of AI replacing humans, I think understanding the capabilities of the AI at various stages of the technology evolution, and the jobs to be done between AI and humans, is at the crux of that confusion. And fundamentally, I actually think there are two big misconceptions, regardless of what you call it, right? Nobody's deploying chatbots anymore, right? Well, we're just calling them something different now. You know, there is this sort of NLU, NLP technology. I mean, we all had Dragon. It's almost a slur, it's almost a slur for bad technology at this point. And then you get into, again, you mentioned OpenAI, but all the generative, large language model-based stuff. Now I'm seeing a big pivot towards small language models, and that has to do with fine-tuning and enterprise data readiness. But here's the two biggest misconceptions. The first one: AI will replace every human agent. The second one: AI will never replace every human agent. Right? It's this binary thinking about AI being able to, all or nothing, wholesale replace the human labor workforce. And what we know is that today it's not there. And part of why it's not there is because we are applying AI to the wrong jobs. We're giving it goals and standards that are actually not supporting the best overall outcome for the business and for the customer. And it's doing that very effectively, right? It's the human agents on the back end, when AI actually achieves the objectives we've given it, that are there to clean up the mess, but then they get blamed for the mess itself, right? And for being ineffective at dealing with customers or anything of that nature, right? So I think, you know, the first and biggest misconception is this all-or-nothing thing. The second, and I've talked about small language models, is this sort of out-of-the-box myth. 
And I'll step aside from the enterprise application of AI for a second. And I know you talk about this stuff and you're very hands-on with it, but I know so many folks who still, well, we talked about treating the chatbot like a search engine, but people are still treating generative AI like a search engine. Like, oh, give me a business plan for X, Y, Z. And then it's giving you something really generic and it's hallucinating. You know, there are so many different factors that go into how you think about assigning a persona and a task, and the context and reference artifacts, and how you tune. And so all of that is just as important in how we individually use generative AI as it is in how we apply it to enterprise use cases, right? And so you can't just turn it on. Famously, by the way, I remember the car dealer that put, you know, OpenAI on their website, and it agreed to sell somebody a car for a couple of dollars, right? So you have to have folks who are being really intentional about curating. And by the way, there are fair use policies and compliance and all these things that have to be considered and deployed in small amounts. The biggest issue at the core of all that, though, the challenge for enterprises, is data. So you cannot reasonably deploy a modern agent and give it agency, right, the responsibility, the autonomy to reason and decide, unless it has effective data. But fundamentally, we're still dealing with broken enterprise systems, right? So you have a legacy system of record, CRM, ERP, something of that nature. And then you have a real-time system of interaction, typically in the contact center, or maybe an agent on the website, something of that nature. And the challenge is that we're trying to put the virtual agents as an extension or an augmentation to that real-time system of interaction for the consumer. 
But there's too much latency in the data and the context that it needs for that to be accurate, effective, informed, and accountable, right? So until we have a measure in place for the enterprise, and a process in place for them to be able to reconcile that system of record and system of interaction, you're going to have a lot of failures with the deployment of these technologies. Yeah, in so many words, in other industries: data governance. If you're just slapping the conversational AI up there, like the car dealer you mentioned, then it really is no better, then it's just a fancier chatbot, but that's on you. Which gets into the more philosophical problem of hallucinations, and why anybody on LinkedIn saying, oh, we'll have hallucinations figured out within the next few years or the next few months, they're dead wrong. No, it's a more philosophical problem of, well, then you need to rely on humans for good inputs. And humans, as we all know, will never be perfect, whether or not we get this technology right. A couple of other things that you really brought up. Can I add one thing to that, Matthew? Please, please. So data governance, yes, which is about compliance, enterprise management of that data. The other side of that coin is knowledge management, right? And that's not so much about the compliance as how effectively we are gathering information from all these different departmental silos and leveraging that view of the customer, both at scale and in aggregate, to understand trends and requirements across our markets, but also for the individual. So there's one-to-one personalization and understanding and context around your history as a customer where this all becomes really important. And I don't think we're going to get into it today. But, you know, the next gen of all this is consumer AI. What happens when, by the way, I'm not going to call the bank, right? I don't want to call the bank to handle this issue. 
I'm just going to have my AI agent do it, right? And so then we have to think about A2A, or agent-to-agent, and this MCP protocol, and how we manage data governance and knowledge management when we're acting with AI as intermediaries between the business and consumers. And then this, as a lot of people may feel, slightly dystopian view that the entire internet will just be robots talking to each other while we kind of sit in husks. I was on Reddit last night. I'm pretty sure that's already the case. A lot of it. A lot of it might be that way, and has been for a long time, more than we're even able to admit or quantify in a short conversation today. A couple of things, even in your answers going back the last few questions. You know, I'd call out this impulse in the public to use conversational AI as a search engine for as ludicrous as it is. But if we're talking about search engines writ large, we're in the dark ages of search. So you would want to scold people: why are you using conversational or generative AI as a search engine? Well, there is no better port in this storm right now. And it's a storm. I think the other thing that you brought up, as we were trying to nail down where the human-AI balance will be outside of the pure ideologies of all human or all AI, as you were talking about before: I think we get into this space where the jury's still out. I don't know if this is a fallacy yet, but in certain industries, you know, they champion this idea of, well, the future of human skills and labor is going to be robot commanders. You're going to have all these agents under you. That may be the case in some industries. I think the other half of that coin is a little bit more humans in the loop, not necessarily that they're commanders. No, they're at the forefront of new processes. They're trying to standardize those processes. They're using, you know, the most manual pen-and-paper processes. 
I'm still a big fan of pen and paper, as the audience well knows. That's how I do my interviews. And those things are not the enemy. When you're at your AI white belt, all manual processes are the devil: oh, we've got to get rid of them, that's the whole point of this technology. When you've got your AI black belt, you're like, no, we need to find the places where the pen and paper is going to bring that to the forefront of our minds. Cursive is not the devil. It brings us as humans, in a very tactile way, in our neurobiology, very much closer to what we're writing down. And that is a very useful thing in these technological times, but it needs to be put in the right place. All just to ask you: what do you think is the right balance between AI and human agents? And what would you recommend to new leaders, you know, starting from ground zero with their CX operations, building AI adoption to achieve that right balance for their organization? You know, I love how you talk about our neurobiology and putting pen to paper and kind of connecting with it. I think what gets lost in all of this is, you know, it's AI and business, and so everything starts with AI, but everything should start with the human. Like, you know, we're all here for some purpose that we're trying to achieve, and our work lives are about creating value for ourselves and our family and society in some way, shape, or form. And so I think we have to lean into our human nature. And I think that's where our greatest differentiation from the technology is. And, you know, even prior to AI, we've had software tools that we use in sales and marketing and everything else. And I've always had a rule with my teams that we can never invest in a tool until we have absolutely, fundamentally exhausted all of the other manual resources. 
So it starts with pen and paper, and then we go as far as we can with a spreadsheet, and then we retire the spreadsheet and maybe go into the system. And the reason for that is, if we just go straight to some sort of a tool that automates everything, we don't have the ability to be in the loop, because we haven't learned and developed the ability to spot an anomaly or something looking off. We're just in full trust and dependency on the output of the tool and the model. And that is amplified a hundredfold with AI. So I couldn't possibly agree more that this is more about human nature. This is more about process design and data, and human-in-the-loop understanding and intentionality is actually the fundamental linchpin for success here. And it's the number one reason that enterprises are failing: we are leading with the tool and not with the thought process. Right. I almost call this the cursive fallacy. And I've got to give credit: like 10, 12 years ago, I was one of those Facebook jerks, very much a small-t technocrat, being like, why do we teach cursive in school? We've got to teach coding. And I had an 11th grade English teacher, the greatest educator I've ever come across in my life, the biggest impact on me, come on my Facebook page and go, dude, don't smack cursive. Cursive is so important. And the farther I get with these technologies, the more I understand he was right. Even if I have a hundred agents, you know, a couple of years from now, all doing my bidding, I am still going to be pen to paper because of that insight. Just any advice, you know, for leaders getting started in CX right now, even maybe outside of the human-AI balance problem, just in terms of all the misconceptions we've been talking about with these technologies? Look, the transformative potential of this technology is outrageous, right? 
The absolute biggest mistake that you can make with it is starting from where you are and iterating. The status quo is not something that we arrived at thoughtfully. The status quo is something that we accumulated, like debt, over years of bad thinking and bad technology. And by the way, all of your own personal history that you've used in your career for the last 20 years to get to where you are is on the brink of being obsolete and irrelevant. And if you have any intention of being a productive leader or part of the workforce for the next 10 to 20 years, you need to personally start with a clean sheet of paper. You need to embrace this technology, but not for the technology's sake. You need to embrace it for its ability to 10x or 100x yourself. But it all starts, to your point, Matthew, with pen and paper and getting what you care about down on paper, how you think about it, what the ideal process looks like, with no sacred cows whatsoever and no commitment to maintaining past thinking. I think about Kodak and some of these companies that famously went out of business because the technology that they sold evolved and they didn't think it would be a threat. Now it's not about the technology that companies sell. It's about the technology on which those companies operate. And, you know, being a software guy and being out in Silicon Valley from time to time, you see companies that are three, four, five people that have grown to 100 million in revenue over 18 months, or are valued at a billion dollars. And they are not using AI to automate enterprise processes. They are using AI like a co-worker, co-designing and co-modeling their business for a new type of future. And I think that's the advice: you know, you've got to get on board with this stuff pretty quickly. Exactly. 
And a big reason why a lot of the zero-sum conversation, the oh, well, you'll hire this many AI to do the job of X humans, loses what will be the actual ROI of the transformation. It can't be quantified as, I exchanged this many, you know, motherboards for this many humans. That's not how it works. And I really, really appreciate you coming on the show and articulating that for this audience in so many words. Baker, really appreciate you being here. Thanks so much for being with us this week. This was awesome. Thanks. Wrapping up today's episode, I think there were at least three critical takeaways for customer experience operations and digital transformation leaders to take from our conversation today with Baker Johnson, Chief Business Officer at UJET. First, redesigning processes must come before deploying AI. Automating legacy workflows only accelerates existing problems. Starting with a clean sheet of paper creates the clarity needed for real transformation. Second, effective AI in customer experience depends on reconciling data across systems of record and real-time interaction channels. Without timely, contextual data, even the most advanced models will fail to deliver consistent outcomes. Finally, the right balance between human and AI agents requires intentional workflow design, not zero-sum assumptions. Treating AI as a collaborator rather than a gatekeeper empowers human agents, improves customer outcomes, and produces measurable ROI. Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerge can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit Emerge.com slash sponsor for more information. Again, that's Emerge.com slash S-P-O-N-S-O-R. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerge AI Research. 
On behalf of Daniel Fagella, our CEO and Head of Research, as well as the rest of the team here at Emerge, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast. [Outro music]