The Artificial Intelligence Show

#192: AI Answers - Responsible AI Adoption, Agency Transformation, Rethinking Workflows, Data Privacy, & Leadership in the Age of AI Agents

50 min
Jan 22, 2026
Summary

This AI Answers episode addresses 14 questions from business leaders about AI adoption, covering agency transformation, responsible AI implementation, workflow redesign, and leadership challenges. The hosts discuss how AI will fundamentally reshape organizational structures, requiring new skills in orchestrating human-AI collaboration and managing autonomous agents.

Insights
  • Marketing agencies must evolve beyond tactical execution to offer change management, AI literacy training, and agent development services to remain viable
  • Organizations need to redesign workflows from the ground up rather than simply adding AI to existing processes, considering where humans vs AI agents should handle different tasks
  • Leaders face unprecedented challenges in orchestrating human-AI teams with no established playbooks or business school training for managing autonomous agents
  • Responsible AI adoption requires understanding current model capabilities and planning for rapidly advancing autonomous capabilities, not just current limitations
  • AI verification and fact-checking will become critical roles, either as dedicated positions or expanded responsibilities for existing editorial staff
Trends
  • AI agents performing multi-hour autonomous work sessions without human intervention
  • Shift from tactical AI assistance to strategic AI agent orchestration in organizations
  • Consolidation around 3-5 major AI platform providers, similar to cloud computing market structure
  • Growing need for AI literacy and change management services from external consultants
  • Emergence of AI verification roles to combat hallucinations and ensure accuracy
  • Transition from human-centric to hybrid human-AI organizational structures
  • Increasing autonomy of AI agents handling complex, multi-step business processes
  • Growing importance of workflow redesign over simple AI tool adoption
  • Rising demand for leaders who can manage human-AI team dynamics
  • Evolution of traditional roles like SDRs toward AI agent management rather than direct execution
Quotes
"No one has gone to business school for that. There is no leader who has been trained to live in an environment where you have these AI agents that are capable of doing human tasks. And you have to not only envision your organizational structure with agents and humans, but then you have to manage the orchestration of all that, the risks of it, the limitations of it."
Paul Roetzer
"I truly am. Just like every workflow we go through, I think over the next 12 months, across every department in our organization, we will have these conversations. It's like, okay, where's AI at now that we can fit it in? Where is it going to be in six months that we don't want to hire for this role because we think AI is going to solve it?"
Paul Roetzer
"People who can do what I just explained within a company, pick a department level at a, you know, horizontal ops level. People who have the knowledge of AI's capabilities, have the business sense of what a workflow looks like, or being able to work with department leaders to define those. Those are insanely valuable employees."
Paul Roetzer
"If you're getting paid writing blog posts, like, if you're getting paid to do these very tactical things and you're not seen, as Kathy's saying, as an advisor, like a true consultant, then you could be in trouble. But if you are seen as a problem solver and someone who helps drive innovation, you're going to have tremendous opportunities to keep going."
Paul Roetzer
"Everything we're doing with AI is to create a more human organization where when the human touch or authenticity matters, that's what we're here for."
Paul Roetzer
Full Transcript
2 Speakers
Speaker A

No one has gone to business school for that. There is no leader who has been trained to live in an environment where you have these AI agents that are capable of doing human tasks. And you have to not only envision your organizational structure with agents and humans, but then you have to manage the orchestration of all that, the risks of it, the limitations of it. Welcome to AI Answers, a special Q and A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 192 of the Artificial Intelligence Show. I'm your host Paul Roetzer, along with my co-host Kathy McPhillips, chief marketing officer at SmarterX. Hi Kathy.

0:00

Speaker B

Hello.

1:22

Speaker A

How are you today? I'm actually in the office. Kathy's at the home office. We are recording this on Tuesday, January 20th, which anybody in the Midwest I assume is dealing with the cold, where it's like 15 below zero in Cleveland today. So I did not want to leave the house this morning. My kids are at home. Everybody's at home in Northeast Ohio. All right, so today's episode is AI Answers. This is presented by Google Cloud. This is our series based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events. And I think we have a question today that was, like, a LinkedIn direct message I got from someone beyond our Intro class. So special thanks to Google Cloud for sponsoring this series. As part of our AI Literacy project, we have an amazing partnership with the Google Cloud marketing team. In addition to sponsoring this AI Answers podcast series, Google Cloud is our partner for the monthly Intro to AI and Five Essential Steps to Scaling AI classes, which we do free every month. In part because of that partnership, they also are sponsoring a collection of AI blueprints, some of which we are going to be releasing in February. We're going to probably talk about that. We have a big AI for Departments week coming up in February. We're going to launch three new blueprints and then our Marketing AI Industry Council. So you can learn more about Google Cloud at cloud.google.com. We have some other notes about other events that are coming up, a bunch of free educational events that we've got through SmarterX over the next, like, 30, 45 days. Kathy will touch on those at the end. So I guess we'll just kind of dive right in, Kathy, and you can give the overview of how this works. This is not a replacement to our regular weekly episode with Mike and I. That continues on. This is a special series that we do. And this is the 12th, Kathy. I believe this is the 12th.

1:23

Speaker B

Yep.

3:07

Speaker A

AI Answers. So, all right, I'll turn it over to Kathy. She'll give us a little rundown and then she will guide us through the questions.

3:08

Speaker B

Great. So, as Paul mentioned, Intro and Scaling AI happen once a month, ish. And then we take the questions that were not answered, or the questions that were answered that were just really good questions, and we bring them to you in podcast form. Claire and our team help us synthesize all of these so we can get them in a usable format for today. So, Paul, I know you squeezed this in between a few meetings, so I want to rapid-fire some of these questions.

3:13

Speaker A

Okay.

3:35

Speaker B

So let's get going.

3:36

Speaker A

Sounds Good.

3:37

Speaker B

Okay. Number one for marketing agencies, specifically, where do you see AI creating the most leverage in the next 12 to 24 months? And what roles, services, or ways of working will agencies need to rethink first?

3:37

Speaker A

Yeah, so if you're new to the podcast or aren't familiar with my background: I owned a marketing agency for 16 years. My agency was HubSpot's first partner back in 2007, an outside partner that became kind of the origin of their partner ecosystem. So I'm very familiar with the marketing agency world. I sold my agency in 2021, but, you know, I've certainly stayed close to it. We have our AI for Agencies Summit that's actually coming up on February 12th. So this is a thing. Even though I'm not running an agency anymore, I think a lot about it, and I still have a lot of friends in that agency space. So, I don't know, the couple of things that really come to mind to me from a service perspective, where I see potential value creation, where there's leverage on pricing, because you can't just do what you've always done with billable hours. If it's an agency or, you know, again, if you're a brand person and you're thinking about it from the perspective of "I work with agencies," the traditional things that agencies got paid for just aren't viable as the future. So I really think about things like change management. When we talk about AI literacy and transformation, it requires significant change management, and so I see agencies being able to play a significant role in that. I think about helping drive adoption at a personalized level within teams. So, like, if we're talking about marketing agencies specifically in marketing departments, being able to go in and provide some education and training and consulting around adoption at a personal or team level, like, what are the use cases and technology these companies should be using? And then I do think that there's going to be a lot of opportunity to do agent and app development at scale. We've talked a little bit about this on the podcast recently, how the barrier to build apps and, like, minimum viable products of ideas is just gone. Like, I've touched on Lovable a few times on recent podcast episodes.
And so I think back to my agency days and that ability to like have an idea for a client, an innovation, a go to market idea and then be able to just build a concept of it like on the fly. So I do think that a lot of organizations are going to really struggle with how to build and integrate AI agents and then AI powered apps. And so I think a lot of agencies have an opportunity to evolve and start offering more of those services and you don't need to be a technical agency to do it. So yeah, I think the future is still a little bit murky as to what exactly agencies look like in three years. But I think change management, AI agent building and orchestration integration, like those are just fundamental things that are going to be needed by every company. And there's very few brands that we've interacted with that have a plan of how they're going to do that. They're still really early and that's usually a good sign for a service company to get in there and build some value added services around it.

3:50

Speaker B

Absolutely. You know, I've been working on the AI for Agencies Summit marketing, and I'm working on an email going out this week, and I was just thinking about how I worked at two agencies, I had my own agency, and my daughter works for an agency. So it's been very top of mind. And I was a trusted partner of my clients. It's probably very hard for them to say, like, we don't need you, because they love working with you. They trust you, they trust all the things you're bringing to the table. So, like, what else can we be doing to help those brands? It's not just like, oh, we don't need you, you're doing this stuff that anyone can do. It's like, we actually really like working with you.

6:40

Speaker A

Yeah, but if you are only, I create landing pages or I write emails. Like if, if you're getting paid writing blog posts, like, if you're getting paid to do these very tactical things and you're not seen, as Kathy's saying, as an advisor, like a true consultant, then you could be in trouble. But if you are seen as a problem solver and someone who helps drive innovation, you're going to have tremendous opportunities to keep going.

7:20

Speaker B

Yep. Okay. Number two, if only a handful of major labs created these systems, how is it possible they don't fully understand how they work? And what should that mean for leaders applying or adopting AI today?

7:42

Speaker A

A little context, I guess, on this question. So the concept here is, like, how do they not understand how they work? This is something I think I said during the intro class; it's probably where this question is coming from. So the way a language model works, which is, like, the fundamental underlying architecture within ChatGPT, Google Gemini, and Anthropic's Claude, is the thing that's basically making the technological capabilities possible. The engineers, the researchers who build and train these models don't fully know why they're able to learn and do things. They just know that when they give them data and they provide examples of what good looks like, they just learn. And I think the example I might have mentioned on intro was it's kind of like gravity. Like, we know it's a thing, we know how it works, but we don't really know why gravity exists. And so I think that's kind of how language models can be viewed. It's like we fundamentally kind of get it. We know what causes them to learn, but we don't know why it works that they learn. And so I think that's part of, you know, just kind of the nuance of this situation we find ourselves in. It's a new kind of, I use the word alien, technology, because we don't really understand it. Now, that doesn't mean that we can't adopt it in businesses and figure out ways to apply it and figure out ways to, you know, drive efficiency and productivity and innovation and growth. But as leaders, just understand that this technology is still pretty new. We're still trying to understand the fundamentals of it, and it doesn't really change anything, but it enables you to then, I guess, unpack the limitations of it and the weaknesses of it. So if you do have a little bit deeper understanding of the nuances of how they work and how they're trained, you can start to realize, like, okay, so I kind of get now why they hallucinate or why they might not be as factual as, like, traditional software.
An AI agent isn't going to just do 10 steps that I'm telling it. It's not like that. It's kind of got some ability to figure out its own path. Like that knowledge helps you as a business leader better plan to use these tools in a responsible way.

7:56

Speaker B

Yep, excellent. Number three: why does responsible AI matter more now than even a year ago? What are one to two responsible AI mistakes you're seeing organizations make right now? And how can we self-correct while we're early on?

10:04

Speaker A

The responsible AI approach matters now because the models are getting more powerful, they're getting smarter, they're getting more generally capable. And so in terms of, like, mistakes people might be making, it could be planning as though we have models from 2023, 2024 that didn't have reasoning capabilities, didn't have more autonomy in terms of agentic capabilities. Like, these things are getting pretty sophisticated in terms of what they're capable of doing. And if, as a business or as a leader, you don't understand the implications of that, you can make missteps in your business strategy, your budgeting, your staffing structure. So all of those things start to come into play. So to build a responsible approach to this, like a human-centered approach, you have to account for where the technology is and where it's going in this very near term. And, like, a really tangible example of this is how long these AI agents can perform work without having to have humans involved. There was a recent study, like last week, from Anthropic that said they've had Opus 4.5 working upwards of 3 to 4 plus hours on its own to build software, and the human doesn't have to be involved. Now, that's specifically in software and AI research, but we're starting to see that play out in other forms of knowledge work. And so to be responsible about the use of this, you have to understand that these things are getting better and better at what we call long horizon tasks, which starts to creep into what all of us do. I think I used the example of a marketing campaign recently on either the podcast or the intro class where it's like, hey, we're going to launch this product on February, whatever, 15th, let's go build a marketing program for it. You know, first build the plan, I'll approve it, and then, like, I want you to do the things.
And so let's say it comes back and says, okay, we're going to build a website page, we're going to build a form, we're going to write a three-part email nurturing sequence, we're going to, you know, define the persona we're doing it for. It's like, that sounds great, go do it. And in theory, there's no reason Claude couldn't come up with the first drafts of all of that stuff. And so if you're going to be responsible about this, you have to accept that, and you have to be able to explain to the people on your staff who are currently doing that work, like, hey, we're going to start shifting some things. So it all starts with understanding what's possible, and then from there you can be responsible about how you're going to integrate it in a safe way into your organization.

10:20

Speaker B

Yeah. And I think, you know, even from a year ago, as adoption grows in companies and enterprises, training and onboarding and education is such a critical piece. You might understand it, but do the people that you're imploring to use AI understand it, right? Number four: when people ask which models or platforms they should be using beyond ChatGPT (this is a long question), how do you recommend they evaluate options without chasing every new release? So here's my question to you, since we've talked about this a lot lately. So much of it is, like, a personal preference, like what your company will approve or what they will pay for. So what are some guardrails, I guess, when you're considering excluding tools rather than deciding which ones? How do you decide which ones not to think about?

12:49

Speaker A

Yeah, I mean, part of this is probably just the guidance of your organization. Whatever the IT and legal departments have decided, the operations team, like, what they're going to give you access to in terms of, like, chasing things. So again, we'll assume this is within a business environment where some platform is being provided, but then you have all these other interesting things that are going on that you could be experimenting with. So my general guidance on this is to focus on getting really good at a platform. There might be times when that platform isn't going to be the right fit anymore. So, I don't know, let's say you're using Microsoft Copilot internally and you realize it has limitations. Like, you know that in your personal ChatGPT account, it can do more than what your Copilot is doing, or Gemini or Anthropic. So when you have specific use cases that fall outside of the capabilities of the platform or model that you're commonly using, that would be a good reason to go and explore something else. Go see if Claude can help you with something that's not been possible with your current platform. But when you start to think about all the other exciting technologies in image, voice, video generation, app development, there's so many tools, so many exciting things to build. I think you just got to be realistic about whether or not that's part of your role and your own career path. There's a bunch of tools I would love to be testing every day that I do not have time to test. And so even our Gen App review series that we do as part of AI Academy, you know, our team is doing that, a lot of it Claire and Mike right now, and they're getting to experiment with these tools that I would love to have time to do, but I don't. And so I have to kind of just watch the 15, 20 minute review that they do of it, because I can't get in and experiment with it.
But at the same time, I am creating enormous value by focusing my usage on ChatGPT and Gemini and integrating them into everything I do and into my workflows. And so I'm okay with that. Like, it's enough for me to just get really, really good and spend like 90% of my time in those two platforms and not worry about the fact that I'm not getting to test everything else, because that's not the key value I create in the organization. The role I'm playing is not to be the one experimenting. My job is to maximize the use of the platforms that are going to account for 80 to 90% of usage and value creation within our company.

13:38

Speaker B

Yep. So I'm down this Claude rabbit hole right now, having a good time.

16:01

Speaker A

Are you, are you using this? I know Mike's telling me every day new things he's doing with Claude, and it's like, I don't even know. I don't even understand what you're doing. I obviously live this stuff, and there's some things Mike's trying with it and I'm like, man, you gotta show me that. I don't really even comprehend what you just said you're doing, right?

16:05

Speaker B

Yeah, Jeremy and I talked about that this morning, he and Mike. We talked about Claude Code and it's like, oh, what an opportunity for us. So I'm excited to go test it out, but I'm like, run it by Tracy first before we connect anything.

16:20

Speaker A

Yes, for sure. Definitely.

16:30

Speaker B

Okay, number five, looking ahead, do you expect AI platforms to consolidate or will most professionals end up working across multiple tools? And how should people think about what they're paying for as models keep changing? I know you just kind of addressed a little bit of that.

16:32

Speaker A

Yeah, I don't know about the consolidation. I mean, it seems pretty apparent that Google, Microsoft, OpenAI, Anthropic to a degree, are going to remain major players. And obviously the dominant usage right now in corporates, at least, is probably Microsoft and ChatGPT. In SMBs, I'm guessing Google probably has a bigger market share. I actually don't know exactly what the market share is, but our own data would tell us that that's pretty logical. I don't know how the consolidation would work. I think you're just going to have these three to five major providers. Two of them are probably going to be dominant in terms of market share, but it'll probably look like the cloud world, I guess, where you have AWS, Google Cloud and Microsoft Azure. And I think you're probably going to have something like that, where there's the two major players, and then there's a third that maybe keeps coming up and taking some market share, and then there's going to be a collection of long tail platforms that have specific uses. Like, I think of a Harvey in the legal industry, but Harvey doesn't build their own models; they're built on top of probably OpenAI's APIs. So I don't know about consolidation. I do think that most people will probably have their core platform, and let's say it's an 80/20 thing, where like 80% of your usage is going to be in your primary platform, be it Copilot, Gemini, ChatGPT, and then 20% of your usage is going to fall into a collection of other tools. That's certainly how it is for me. Like, 80% easily is ChatGPT and Gemini, but then I dabble in Lovable and some of these other applications that maybe make up 10, 20% of my other usage of AI tools.

16:46

Speaker B

Okay, number six, from a governance and security standpoint, what are your thoughts on organizations building or hosting tailored AI systems on their own infrastructure instead of relying on third party platforms?

18:31

Speaker A

I think in some industries and organizations it's essential that they're building more internal systems that are more walled off, that they can control better. From a safety, risk, and compliance standpoint, I could see that being essential. So I think that that's just always going to continue to be the play. I think for a lot of others, especially small businesses, being able to just get up and running, you know, instantly on a platform like Gemini or ChatGPT, just out of the box, is a really intriguing thing. And then getting access to all the updates. What I've seen oftentimes with organizations who do choose to build through, say, the APIs of, like, an OpenAI or a Google is the capabilities often lag behind what you can go get with the out-of-the-box product. And that's kind of inherent, because they're going to build it on a previous generation model, which is going to be outdated by the time they get the APIs set up, and they're going to put in restrictions on it. So it limits some of the things they can do with it. And so that's just a trade-off. When you want to reduce risk, make it safer to use internally, it often comes with a reduction of the features and capabilities of the platform you're going to have access to. So it just seems like that's how this is going to play out, much like any other technology the last 20 years has played out. As it gets more controlled by IT, there's just going to be more restrictions.

18:45

Speaker B

Yep. Number seven, many clients worry that sharing proprietary or unpublished content with AI means it effectively becomes public. Is that a valid concern today and how should organizations address it? I know we get asked this every single class, every single time we do this. Has anything changed?

20:07

Speaker A

No, not really. I always, you know, tell people, check with your attorneys and, you know, make sure the terms of use that you've agreed to protect you from this. But if you're a business account with any of the major platforms, the AI companies are going to build in that they're not going to train on your data and things like that. So again, you can't make a blanket statement and say yes, it's safe, don't worry about it. Go look at the tools that you're using and see what you're agreeing to, and then make sure you're staying up on any changes to those. If you are using a free product, just assume whatever you put in could be used in training data. But again, it's not like if you upload some information, that exact thing gets thrown into a knowledge base that ends up being able to be searched by your competitors a year from now. That's not how this works. It's just training data that goes in, and it learns all kinds of things, and it doesn't recall a specific document you gave it per se. So yeah, I mean, I think people's concerns around this are probably overblown in most cases. But that does not mean you shouldn't take a precautionary approach to putting in sensitive information. I do know that, like, I've spent some time with law firms specifically, you know, on the IP side, where anything related to patents, there's just no way they would ever put stuff like that in any AI. So there's always caveats to this. I would say overall we probably don't need to be as cautious as we are, but we should still do our homework and make sure we and our legal teams are confident that whatever we're putting in is okay to be put in.

20:26

Speaker B

And there's different degrees too. You know, you've talked before about you've put a lot of company data in some of these tools because the output and what you're receiving is greater than the risk for you. But then we talk about like we'd never put customer data in there. Even if it says we won't train on this, we're not sharing any of our customer data in any of this.

22:11

Speaker A

Yes. Yeah. And I mean, I guess unless you had like a proprietary internal model that isn't connected to the cloud and it's like staying on, you know, in your server and things like that, then that might be a different story. Or you know, I guess if you think about like HubSpot has AI that has access to customer data and it's like baked into our CRM. Like again there's always these sort of caveats and footnotes to like every decision that's made. But I think that's where your generative AI policies become so critical is that everyone on your team is clear with whatever your organization's policies are and they know to follow those.

22:29

Speaker B

Okay, number eight, as AI reshapes products and decisions, trust in data becomes foundational. How should organizations actively signal trust in both their data practices and AI driven outputs?

23:07

Speaker A

Transparency. I mean, this is a pretty straightforward one. I think it's: have your generative AI policies, your responsible AI principles clearly documented. Make sure they're infused into the training programs within your company, that they're not just words, you know, on a screen, that they're actually lived within the organization. And then figure out which elements of that need to be shared publicly and you need to be transparent about. So, you know, I'm not a big proponent of, like, every single post you put up or email you send needs to say, I used AI in these three ways to do this thing. Like, I think we've moved past that in most cases. But the example I always give is, like, if authenticity matters, then you might need to disclose whether AI was used or not. So for my Exec AI newsletter that I do every Sunday, I write that with zero AI usage. And I do put at the bottom, this was 100% written by me. Because I think, especially for the editorial part, that's what people are signing up for. They want to hear from me, not my AI. And so I don't use it in the editorial writing. Same with my LinkedIn posts. Now, I don't tag every LinkedIn post with "100% written by me," but I'm clear in saying that, like, I write all my LinkedIn posts. Now, I'm a writer by trade, so I'm also not saying that every CEO or every leader should follow how I do it. It's what I do for a living, so I'm comfortable writing and I enjoy writing. Some people don't. And so if you're using AI in different ways, like, it's a very personal thing. It's a subjective thing. There's no, like, true hard and fast rules as to what should and should not be AI assisted. But everybody's got to figure that out for themselves. And my main thing is, if people expect authenticity and expect it to be your voice, you've got to be really careful about how much you're using AI in that process.

23:20

Speaker B

I wrote an email last week for MAICON, and someone was like, was that AI? And I was like, excuse me? No, that was me. That was me sitting down, watching TV, and all of a sudden I was like, oh my gosh, I've got a great idea.

25:16

Speaker A

Yeah, but that's the joy of it. Like, you and I enjoy that creative process of writing. We like coming up with the idea and getting, you know, being clever and writing three versions of it and things like that. Some people, it's not their thing. Right, I get it. So again, there's no right or wrong answer, necessarily.

25:27

Speaker B

Okay, number nine, as AI becomes embedded across platforms, how do you see workflows changing inside marketing departments and agencies, from creative development to client handoffs, approvals, and feedback loops?

25:46

Speaker A

Feedback loops? Completely reinvented. Like, the more time I spend on this and the more time we do this internally, I just think that workflows are going to be fundamentally transformed. And I think I shared this recently on a podcast episode, maybe when I was talking about the org chart app that I was building in Lovable. As I'm building it, I'm actually thinking about each role in our company as we plan to hire more people, and I'm thinking about the workflows those people perform. And I'm trying to imagine, like, 12 months from now, where's the human's role in those workflows and what's the AI's role? And so, as an example, like, we don't have SDRs on staff. We don't have somebody who, you know, let's say, looks at companies that come to our Intro to AI class and then does outreach to them. Like, that's not a fundamental role we have hired for. A lot of, like, SaaS companies will have SDRs that are going through the leads and trying to qualify them and then hand them off to the sales team to close the deals. And so I'm looking at it thinking, I don't know that we're ever going to hire an SDR. Like, I don't know that the workflow of an SDR is something that's going to be a human role. Or if we do, it might be one instead of five down the road. Like, you might have an SDR, but that person's job may be more AI agent orchestration than it is doing the actual outreach. It's like managing the SDR bots, in essence, that are automating all of this work. And so we're going to go through the process of developing the SDR workflows ourselves with humans, and my main goal is so I can probably automate it, so we don't have to hire a bunch of SDRs. I don't think that's a role that is going to be extremely valuable within our organization. Something similar we're doing with customer service, where, you know, traditionally you could just have humans doing that, but the reality is that most people just want a quick answer to something that is pretty predictable.
It's going to be one of a long tail of 300 things they're going to look for. And honestly, an AI agent is just better at getting them the answers to those things faster. Yeah, and we want to free our humans up to actually get on a call, jump on a Zoom, face to face, solve something. We don't want them doing all these mundane things that aren't fun for anybody, and where we're not going to get the answer as fast as AI can. So when I think about the workflow that goes into customer support, it's like, okay, where is the AI agent going to fit in so we don't need those things? I truly believe that over the next 12 months, across every department in our organization, we will have these conversations about every workflow we go through. Okay, where's AI at now that we can fit it in? Where is it going to be in six months, such that we don't want to hire for this role because we think AI is going to solve it? And I'm not answering specifically for marketing departments or agencies, I guess, because that's the question, but this answer is universal. Every department in every organization should be going through this process of analyzing workflows. And then the other thing, Kathy, is we're just analyzing what we think is an optimal workflow. Okay, here's the 12 steps we go through to do that thing. Well, what if that's not the right way to do it? What if there's a more efficient, more innovative way to solve the problem? So we're using AI to actually say, hey, here's our workflow. How would you make this workflow better? Then, where do AI and human fit within a more optimized, more innovative workflow to solve the problem or create the value for the end user, the stakeholder? And I think people who can do what I just explained within a company, whether at a department level or at a horizontal ops level,
people who have the knowledge of AI's capabilities, who have the business sense of what a workflow looks like, or who are able to work with department leaders to define those workflows, those are insanely valuable employees. So if what I just explained sounds like you and something you want to do, you've got some really good job security for the next five to seven years, because everybody's going to need to do this. And most companies are going to lag dramatically behind in terms of figuring this out.

25:59

Speaker B

Yeah, it's also interesting. I feel like, with the client handoffs, approvals, feedback loops, AI is just surfacing that some things have been wrong or inefficient for a very long time.

30:13

Speaker A

Yeah.

30:24

Speaker B

You know.

30:24

Speaker A

Yeah. And in some corporations, inefficiency is not only accepted, but sort of rewarded. Let's be honest, at some bigger corporations, slow-moving is kind of the comfort zone for most people, and they don't want things to move faster or be that much more efficient. I just don't think that's going to fly moving forward.

30:25

Speaker B

Okay, number 10: for people who've learned plenty of AI theory but want to actually build assistants, GPTs, or systems, what's the fastest and most effective way to get hands-on?

30:50

Speaker A

I would go into your favorite AI platform and ask it how to build one. So if you're an AI Academy member, which is our online education platform, I have a course on how to build a "Co-X." It's basically how to build an AI assistant, and it walks you through it. So that's available. But you could also just search for this: how do I build a GPT? How do I build an AI assistant in Google Gemini? Or go straight into the Gemini app or ChatGPT and say, hey, here's what I do. I would love to figure out how to build some GPTs to help me be more effective. And it'll guide you. You can literally say, okay, I really like the idea for this GPT. Can you write a system prompt for me that I can use to train the GPT? It'll do it. Or, to kind of shortcut it, you could use our JobsGPT. We'll put a link in the show notes for that. Say, hey, my role is this. I'm a partner in a law firm, or I'm an HR executive, or I'm a head of operations, whatever it is. How could I be using AI? And JobsGPT will actually lay out a bunch of ways you could be using it and give you some rationale as to which ones might be most valuable. And then you could say to it, okay, help me prioritize three GPTs I could build that are going to create the most efficiency for me based on these tasks you just identified. And it'll go through and do that. So you can just talk to JobsGPT about it and it'll help you.

31:01

Speaker B

For the first one I built, I actually had two monitors and two instances of ChatGPT open. In one, I'm asking, what's the prompt? What questions should I ask? It was basically giving me all the answers to cut and paste into the other. And I edited it, because a lot of it just didn't make sense for me. But I'm like, okay, let's give this a spin and see how it goes. Went back in, edited it, made it better. There was a lot of just trial and error with that first one.

32:25

Speaker A

Yeah. And one of the things I've found really helpful, and I've mentioned this on some recent episodes, is when I give my prompts and I'm working on something like this, I will say specifically to ChatGPT or Gemini: let's do this step by step together. And then it doesn't just vomit 5,000 words and say, here's your thing. It's like, no, I want to go through a process. I want to make decisions. I want you to help me do it. So if you just say, let's do this step by step, the AI assistant will say, okay, sounds good. Step one, let's do this. What do you think? That sounds great. Okay, step two. And you can just go back and forth with it. The best way to use any of your favorite AI assistants as an advisor and a consultant is just to tell it to go step by step with you.

32:49

Speaker B

Yep. Number 11: what's a decision organizations think can be automated, but absolutely shouldn't be once AI scales?

33:34

Speaker A

Wow, that's a good one. I've seen a lot around HR, where AI is getting integrated into the HR process of reviewing resumes, prioritizing candidates, things like that. That feels like we really need the human in the loop. I do think AI can be used a lot to streamline things in HR and the talent side of businesses, but it should only be to free up humans to spend more time with humans, to figure out who the right humans are to be part of an organization. So I feel like it's scaling our ability to do HR better, but we shouldn't remove the human. I think customer success is similar. I explained our customer support concept earlier, but all that automation we're looking to build in is to free up the human. So when someone says, I want to talk to a human, we have humans available to do that. Everything we're doing with AI is to create a more human organization, where when the human touch or authenticity matters, that's what we're here for. And there are decisions AI can't make: it can simulate empathy, but it's not actually empathetic. We need humans with empathy, and we need humans with creativity and things like that. That's the stuff you can't just scale by spending another 20 bucks a month, or buying another few licenses, or doing some AI training. That's not going to solve it. So I think for those core decisions that are fundamental to the organization, the human needs to be in the loop, certainly if not the final decision maker. But I do think that for more and more things where today we think, oh, AI can't decide that, it can't just take those actions without us, a lot of those barriers are going to come down for early adopters and innovators who get more and more comfortable with AI's ability to make decisions that are not critical. There's always that analogy.
I think Jeff Bezos had the one-way door, two-way door framework at Amazon. If it's a one-way door thing, where we make this decision and there's no turning back, that's got to be humans. If it's a two-way door decision, hey, we make the decision, it's not right, we can just walk back out the door, go make a different decision, and come back in. I think a lot of companies will look at it that way and say, all right, if it's a two-way door thing and we can backtrack on this and fix it, then AI maybe plays a greater role. If this is fundamental to the organization and there is no turning back, you are not relying on AI to make that decision.

33:43

Speaker B

Yeah. Okay, number 12, what's one early sign that an organization is scaling AI faster than its ability to govern it?

36:12

Speaker A

Yeah, I'm just trying to think whether I've run into this in our own organization. I think it's more instinctual for us at this point. Obviously we're pretty much at the frontier of this; we're seeing and experimenting with technology faster than most other organizations. And so in some cases there's a level of risk we're willing to take on, because we're trying to stay out on the frontier and see this. But I think it could be a people thing. It could be that people have too much fear and anxiety, and if you were to poll your people on how they feel about AI and the sentiment is not overall good, then maybe you're just moving faster than your people permit, and it's not a human-centered way to do it. It feels like you're leaving people behind, or they're telling you they feel like they're being left behind, or they're not clear exactly what's going on. I would say it's probably that. It's probably more qualitative: being in tune with what's going on in the organization and how people are feeling about it. Do they understand why we're doing it? Do they understand the technology we're using? Oh, this would actually be a good quantitative one: utilization rates of the AI tech you're giving them. So let's say you bought 300 ChatGPT licenses and only 20% of your staff are weekly active users. You're probably moving too fast. They don't get it. They're not actively using the technology you're giving them to do their job better. So actually, as I'm talking and thinking out loud here, utilization rate of the tech you give them is maybe the greatest indicator. If you're continuing to move and move and you've still only got 20, 25% of the staff even trying it (or maybe it's daily active users you should be monitoring), then you've got a problem and you've got to, you know, reskill.
So yeah, I think it's probably a mix of qualitative, where you survey them and find out how they feel about it and what they're thinking, and quantitative, monitoring utilization of the technology that you're giving them.

36:21

Speaker B

And the bottom line of both of those things is education.

38:26

Speaker A

Yes, for sure.

38:28

Speaker B

Number 13. As AI becomes more embedded, what new skills do you think will matter most for leaders, not just practitioners? And where do you, where do leaders.

38:32

Speaker A

Orchestration of all the AI technology. This is where I'm starting to feel it myself already. I saw a Business Insider article about McKinsey, or one of the consulting firms. The CEO was like, yeah, we have 50,000 employees and 20,000 more AI agents, or something like that. And I was like, what? How does that even work? What are you talking about? But I do think there's this challenge as leaders where we have to start orchestrating humans and AI together, and no one has gone to business school for that. There is no leader who has been trained to live in an environment where you have these AI agents that are capable of doing human tasks at varying levels of autonomy. And you have to not only envision your organizational structure with agents and humans, but then you have to manage the orchestration of all that, the risks of it, the limitations of it. That is an entirely new skill and vision needed by leadership that nobody really has. And there's no other way to do it but to be uncomfortable. I'm in the middle of this myself, trying to figure out what this looks like and how we'll integrate these elements, and when we start putting AI agents on the org chart versus just embedding them in workflows. I'm modeling this right now; I have variations of how this could look. And I think for most leaders, it's knowing to even be thinking about these things and asking these questions, and then developing systems to manage it. So let's say you're like that example consulting firm, and maybe you have a hundred GPTs or agents in use around the company. How are you tracking that? Where is an agent tracked? Who's using it? Because it's not all going to live in your ChatGPT license. You might have a Lovable agent that's building apps for your marketing team or your sales team. It's like, okay, where's the visibility for that? These things don't all talk to each other.
How do we unify all this information about all these different AI tools that are being used, and who's testing Claude on their computers, things like that? So there are all kinds of operational issues around this, and I think that's a big one, the technology side. And then the human management is a whole other element that leaders have to be familiar with. Because no matter how much the technology makes possible, there's a whole bunch of humans who don't love this stuff and have some fear, certainly anxiety. And you can't underestimate the friction people can cause to technological change. So as leaders, we have to figure out how to navigate this, and that's uncomfortable and unclear. Again, I'm going through this myself. I talk to leaders every day who are trying to figure these things out, and there are no books written or blueprints to turn to for how to do it.

38:41

Speaker B

Yep. Okay, number 14: you've talked about the idea of an AI output verification manager. How credible is this as an emerging role? And could verification itself become a scalable business model?

41:42

Speaker A

For research firms and media firms, it is today. I mean, it's already either a skill or part of the role of an existing person. Your editors, for example, and research assistants, they're doing this. They are already verifying these things. And on the brand side, if you're publishing stuff, you should be doing it too. But those are the main ones: media, research, and marketing teams that are publishing content. I think it's essential. So even if it's not a role you have formally defined, where you put a job posting up for this person, it has to be part of the responsibilities of someone in the workflow chain that publishes the final pieces. Another one, I guess, would be law firms. I've seen plenty of instances where law firms have used AI to create briefings and documents they submit to judges without verifying the information or the citations themselves. So anywhere you have to verify data, statistics, names, anything like that, any kind of facts going out publicly or being used internally to make decisions, someone has to be verifying it if AI has been used in the process. At minimum it's a responsibility of anyone in that workflow. And I could certainly see, in some of those other instances I talked about, like research firms and media firms, it might just be a role: someone who gets really good at working with the AI models and knows the ins and outs, the hallucinations, and the things to watch for. If there's enough capacity needed to justify a full-time role, I could see it being that in a firm like the ones I mentioned.

41:55

Speaker B

Do you think it would be easier to take someone internally who has the institutional knowledge and the experience and make them that person? Or do you think someone who actually knows how to use the models is a better starting point?

43:42

Speaker A

I think if you already have those people on staff, certainly it could be. But we're unique, so sometimes I have to get outside of our bubble at SmarterX, because we hire a lot of communications people, people trained in journalism. So for us it's a natural thing. Of course you're going to verify facts; who would ever publish something you didn't? But then you realize a lot of people don't know that these models hallucinate the way they do, going back to an earlier question about how much we really need to understand how the models work and how they're built. If you don't understand the limitations and the hallucinations, you might not even know to have this person. But if you have editors internally, if you have people who would check citations regardless of where they came from, then that person is just evolving their role to do these things. But if you don't have that person, and you plan to start creating more content, doing more podcasts, more webinars, more research reports, more downloads as part of your marketing funnel, because AI can all of a sudden help you create all that stuff, and you haven't solved for verification, then yes, you need someone to do that, and you might have to go hire for it.

43:53

Speaker B

Yeah. Even with your newsletter every week, I still cut and paste proper nouns and quotes into something to make sure it's actually correct. And not that I don't trust you, but, I mean, I've clicked.

45:03

Speaker A

It's just what we do.

45:17

Speaker B

It's what we do.

45:17

Speaker A

Verify every fact.

45:18

Speaker B

Yann LeCun. I've checked his name 5,000 times, but I still do.

45:19

Speaker A

Are there two N's in the first name or the last name?

45:22

Speaker B

Exactly.

45:24

Speaker A

I know. I did the same thing.

45:26

Speaker B

Okay, so there were 14 questions, and I've got one bonus question. Our friend Amy Martin texted me. I'm like, who's texting me at 6 a.m.? And it was Amy, and I texted her back at like 6:02. She is looking for a good book to dive into, preferably on AI or anything on advanced technology. So, any current recommendations, or any OG books that you would recommend to this audience?

45:27

Speaker A

Wow. Yeah. So I've got that whole stack of books out on the shelf in the office. Some of my favorites that come to mind: Genius Makers is one I always recommend, just for context. It's from 2020 or 2021, but I always loved it. The Algorithmic Leader by Mike Walsh, who was a keynote at MAICON in 2024, is a real classic. Both of those are pre-GenAI, but they're really good. The AI-Driven Leader by Geoff Woods, who was a keynote this year, is one of our current favorites. That's actually the book club book for Academy.

45:49

Speaker B

Right?

46:27

Speaker A

Or did we announce that yet? Or did I just preemptively announce it?

46:27

Speaker B

Well, per usual, you just announced it on the podcast before anyone else knew about it.

46:31

Speaker A

So we have AI Academy. We're going to launch a book club, and that's going to be the first book. So yes, I guess I just pre-announced it now.

46:34

Speaker B

Guess we're. Guess reluctant.

46:41

Speaker A

Yeah, yeah, we're doing that. What's the one by Karen Hao? Empire of AI. Yeah, if you're interested not just in the downside of all this, but in the reality of it, the concern around consolidation of power, a few labs basically deciding the future of humanity, and who the people behind those labs are, Empire of AI is a really good read. It was a New York Times bestseller. So those are a few that jump to mind. Co-Intelligence, yeah, that's a good one.

46:42

Speaker B

Ethan Mollick.

47:17

Speaker A

Yep. I'm reading some different books right now. I actually have a book going that's not about AI, which is weird. I haven't read a non-AI book in like five years. But yeah, those are some good ones.

47:18

Speaker B

Okay, so I'm going to close with this: we have five upcoming free virtual educational classes, webinars, and events. I'm just going to run through the list really quick, and we'll include the links in the show notes. If you go to artificialintelligenceshow.com and click on Show Notes, you'll see them in episode 192's post. January 22nd: Five Essential Steps to Scaling AI. So if you're listening to this on Thursday, it's actually at noon Eastern today. January 27th: the 2026 Marketing Talent AI Impact Report that Mike put together. There's a webinar corresponding with that, and you can go look at the report. The same week, on the 29th, we have How AI-Native Research Is Fundamentally Changing Business Decisions, with a great company, ReadingMinds AI. Our next Intro class is February 10th. Our AI for Agencies Summit is February 12th. Again, all of those are free. And then we have an AI for Departments series coming up in later February, but we'll talk about that next time.

47:31

Speaker A

And on the webinar front, those are SmarterX AI webinars. Like Kathy said, we'll put links to all of these in the show notes, but you can go on SmarterX AI and get access to all this information as well. All right, Kathy, that was good. The questions are always my favorite part of doing these classes. So, like Kathy said, we have the Scaling AI class coming up on Thursday the 22nd. We will do another AI Answers special episode next week, answering questions from that class that we don't get to during the live session. Thank you again to Google Cloud for partnering with us on this series. And we will be back with episode 193, our regular weekly episode with me and Mike, next week.

48:30

Speaker B

Sounds great. Thanks Paul.

49:13

Speaker A

Thank you. Thanks for listening to AI answers. To keep learning, visit SmarterX AI, where you'll find on demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

49:14