The Artificial Intelligence Show

#181: AI Answers - Measuring AI Skills, Aligning Leaders, AI Literacy Frameworks, Overcoming Resistance & Preparing for AI Agents

50 min
Nov 20, 2025
Summary

This AI Answers episode addresses questions from SmarterX's Scaling AI class, covering AI literacy frameworks, employee training resistance, ROI measurement, and enterprise readiness for AI agents. The hosts emphasize the urgency for organizations to develop AI capabilities while providing practical guidance on implementation strategies and overcoming organizational resistance.

Insights
  • AI literacy will become a performance requirement by 2026, with companies tracking AI competency through certificates, activities, and measurable business impact rather than just training completion
  • Organizations should focus on low-risk, high-impact AI use cases that don't require data integration while waiting for broader data governance solutions to be implemented
  • Revenue per employee is emerging as a key universal KPI for measuring AI impact across organizations, with expectations that this metric should increase as AI drives efficiency gains
  • AI agents are currently overhyped in terms of autonomy - they can perform specific tasks but still require significant human oversight and cannot replace entire job functions yet
  • The shift from traditional SEO to 'GEO' (Generative Engine Optimization) requires content diversification across multiple platforms to ensure visibility in AI-powered search results
Trends
  • AI literacy becoming mandatory for employee performance reviews and career advancement
  • Shift from guidelines to formal AI policies in regulated industries
  • Revenue per employee emerging as a standard AI ROI metric
  • AI agents moving from general tools to industry-specific applications
  • Financial services becoming the next major AI agent deployment area
  • Traditional SEO evolving into Generative Engine Optimization (GEO)
  • PR and earned media gaining renewed importance for AI search visibility
  • Custom fine-tuned AI agents outperforming general-purpose agents
  • Job displacement acceleration expected in a 3-6 month timeframe
  • Visual AI capabilities advancing rapidly for document and whiteboard analysis
Quotes
"There's way more people resistant to AI than there are excited about. It has been my experience in companies and so we need to do our part to try and bring those people along."
Paul Roetzer
"If you know that someone who invests the time in getting the certificates and using the tools every day has a 10%, 20%, 30% greater impact on efficiency, productivity, revenue growth, whatever those metrics are, how can you justify keeping the employees who refuse to do it?"
Paul Roetzer
"AI agents are basically just AI systems that can take actions to achieve a goal you give them or a project you ask them to complete so they can perform deep research."
Paul Roetzer
"Don't wait for that all to happen to get benefits from generative AI. There's too many times where I've gone into organizations and they're like, oh, you know, the IT team's working on this or the data team's working on that."
Paul Roetzer
"I really think that there's going to be a far greater impact on jobs than people realize in the near term. And I mean that with like three to six months."
Paul Roetzer
Full Transcript
2 Speakers
Speaker A

There's way more people resistant to AI than there are excited about. It has been my experience in companies and so we need to do our part to try and bring those people along. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 181 of the Artificial Intelligence Show. I am your co. I'm your host, actually. All right, sir. Along with my co-host Cathy McPhillips. I was just telling Kathy it feels like a Friday because we have like a.

0:00

Speaker B

Fun team lunch today.

1:12

Speaker A

We do. And it's like next week's the holiday, and I think my mind is already just done. And I'm doing three podcasts this week. So I'm like, oh, we're gonna make it through though. I promise we're gonna do this. This is a special edition of the podcast. So if you're tuning in and you're wondering why on a Thursday there's a new episode of the Artificial Intelligence Show, because we usually drop the weekly episode on Tuesdays: we have a series called AI Answers. It is presented by Google Cloud. This is the 9th edition of this we are doing. The series is based on questions from our monthly Intro to AI and Scaling AI classes that Kathy and I do together. Oh, I forgot to say, Kathy is our Chief Marketing Officer at SmarterX, if I didn't give the full proper introduction. So thanks to Google Cloud for sponsoring this series as part of our AI Literacy project. We have an amazing partnership with the Google Cloud marketing team that's been going strong for a year now. We're looking forward to a bunch of exciting things next year, but they sponsor the AI Answers podcast series. They're also our partner for the Intro to AI and the Scaling AI classes that we do free each month, and a collection of AI blueprints that are going to be coming out soon. And then we team up on the Marketing AI Industry Council, so you can learn more about Google Cloud at cloud.google.com. So, Kathy, this is, if I'm getting this information correct, we did a November 14th Scaling AI. Does that sound right? Friday, November 14th, we did a Scaling AI class.

1:14

Speaker B

That is correct.

2:41

Speaker A

And these are questions from that class that we didn't get to during it, maybe a few that we did, just to reiterate them for a larger audience. So we are recording this on Wednesday, November 19th. So this will drop on November 20th, and then we'll be back for our regular weekly episode with episode 182. All right, if I haven't confused everybody yet, Kathy, let's recap. This is AI Answers. It is a special edition of our Artificial Intelligence Show podcast that Kathy and I host together. And I'm going to turn it over to Kathy to bring us into the first question of the day.

2:42

Speaker B

That's a great intro.

3:17

Speaker A

Thanks. All over the place.

3:19

Speaker B

Yeah. Hopefully by the ninth time we're doing this, people know to expect this every few Thursdays, but we get new listeners each week.

3:22

Speaker A

We do have so many new listeners. They may have never been here.

3:29

Speaker B

Yeah. So if you are a new listener, if you want to go back and listen to past episodes, obviously they're very timely when Mike and Paul do the Tuesdays.

3:31

Speaker A

We are both on our game today, huh?

3:41

Speaker B

But if you look at the AI Answers questions, what those are are questions from all these classes that we do, and those are more evergreen, I think, than the week-to-week episodes. So if you enjoy this, there are eight more episodes you can go back and listen to, and more to come in the future.

3:45

Speaker A

And we do try and mix it up. Like, our team, Claire and Kathy, do a good job of trying. Sometimes we'll do similar questions, because we get a lot of similar questions as we do these classes. But we do try and introduce new topics each time and try and pick some of the questions that maybe we haven't previously answered, so it's not completely redundant if you do go back and listen to the other eight. And again, you can scan the timestamps and just pick the ones that you want to hear the answers to that are most relevant to you.

3:59

Speaker B

The other thing is, I may take some creative license every now and then, in that if a question is very similar, like a legal question, we get legal questions all the time, I try to put a different spin on it based on some news or something, just to make sure that it is relevant and not completely redundant to things we've answered before. So very quickly: we do the class, we get all the questions, we export the questions from Zoom. Claire runs them through AI and, through listening to the episode and pulling out some key questions, removes redundancies, prioritizes them, and puts them in a format where they segue from one to the next. I go through, do a quick once-over, get them in a good spot, and then here we are today.

4:25

Speaker A

And then just from an answering perspective, we do this like we would do it live. I don't see the questions when we do them live; we just kind of go off the cuff. And so we actually do the same thing here. Kathy and Claire put together this doc, but I actually don't pre-read these questions. So if I don't have a great answer, I will say that, and we'll say, like, we'll look into it a little bit more, get more resources. But yeah, for me this is just kind of a live thing. We jump in, we record it, and we move on. So I'll do my best.

5:04

Speaker B

All right, well, let's get started. Number one: have any credible AI literacy or competency frameworks emerged that help leaders assess and track employees' AI skills over time?

5:33

Speaker A

That's an interesting one. I don't know that I've really seen a great model of this yet. It's definitely something that we think about internally. I have seen more examples of people saying it's going to be part of performance reviews, where there is some expectation level, and whether it's milestone or certificate based, like, we want you to achieve these five certificates or go through these programs, or it's more activity based, like, we want you to build at least three GPTs for your job, things like that. So we are starting to hear about those examples where more and more companies are starting to make AI literacy a requirement as part of your professional development program and starting to provide more resources to enable that to be possible. So I do think in 2026 we'll probably start to see a lot more about how organizations are not only enabling AI literacy but tracking it, and then determining performance-based pay and promotions based on not only how literate but how competent you are with AI and how much impact you're having with your use of AI.

5:44

Speaker B

Have you given any thought to how you might assess your team? All of us.

6:50

Speaker A

That's a good thought. No, I don't know. Even outside of that, we look at skills and traits from an individual perspective, and that's something we'll probably do a lot more of moving into next year now that we're scaling our team. So integrating AI capabilities, AI literacy, into the skills and traits that are required will become an expectation. I could see us having some more activity-based things. Certainly there's certificates in the trainings. We have this internally now: we want everybody to go through this. And honestly, we'll probably use our new AI Academy learning management system ourselves, function as a business account within there, set required learning journeys and milestones for our team, and then monitor if people are actually achieving those things. But I do think that for us, a lot of it really comes down to the impact you're having with the tools. How are you using them to improve workflows? What innovations are you driving? What growth are you contributing to for the company overall based on your use of them? So I think that's kind of how we'll start to look at it. But I do not have a blueprint in place for that.

6:55

Speaker B

Yeah, and actually I talked to Mike earlier this week and I said I would love to spend, you know, an hour with him going through, like, okay, here's what I'm doing for MAICON, for example, with my marketing plan, and here's the places I have AI integrated and everything. Help me figure out what am I missing. Are there any big-picture things? Because I'm so close to it that he could just be like, oh my God, you know, you can try this or try this. And I'm excited. I know that's a little bit off topic from the question.

7:56

Speaker A

Yeah.

8:19

Speaker B

But working with each other just to surface some new opportunities I think will be really critical.

8:19

Speaker A

And I want to give you one example. And again, this is more performance based. We're looking at the customer support side of AI Academy by SmarterX. And as we look to scale customer support to thousands and potentially tens of thousands and hundreds of thousands of learners in that system, the integration of an AI agent that can solve, you know, 80% of customer inquiries on demand, 24/7, is a really fundamental thing that we think is going to be very important. And we think it's something we can actually introduce quite quickly. And so that's one of those things where you can look at a team member and say, listen, we want you to be able to help us make this significant impact. And so you going and knowing how to do this, and actually creating the agent and testing the agent, that's a great impact. And so when you look at the year in totality, oftentimes with our professional reviews you give someone a chance to tell us their story. Like, what was the year? What impact did you make? What are the big things you worked on? And so to be able to have people, and this is more qualitative, I would say, be able to say, listen, I actually contributed to building this agent that has this impact on the business. I created this GPT which saved us 300 hours. That's the kind of stuff I would love to see, and I think the stories we'd all love to be able to tell as employees and leaders.

8:24

Speaker B

Absolutely. So speaking of training, number two, how important is it for all employees, not just early adopters, to develop basic AI literacy? And what risks do organizations run if some people opt out or lag behind?

9:43

Speaker A

Yeah, I mean, I come at this from quite a biased perspective, and I understand that, but I think it is, like, the most fundamental thing. So the way that AI is going to transform organizations is there's going to be some top-down vision and resources provided and organizational structuring to enable it. But a lot of the innovation, a lot of the most impactful use cases, are going to come from the bottom up. And that can be from an executive assistant to an intern, to a manager, a director. It can really bubble up. And so the more you empower people with not only knowledge but the tools, the greater chance you have of actually doing this and accelerating. Now, in terms of the risks of people opting out, the risk is they shouldn't be in the company in one to two years. Because, you know, let's say that you have, like, 80% adoption. So let's just take a team of 100 people, and 80 of them are like, all right, we get it, we're in, we're participating in this process. We're using the AI assistants, we're finding ways to innovate. And then there's the other 20 people who are like, I want nothing to do with this. Think about, go back to question one about assessing the impact and assessing people and their AI skills over time. If you know that someone who invests the time in getting the certificates and using the tools every day has a 10%, 20%, 30% greater impact on efficiency, productivity, revenue growth, whatever those metrics are, how can you justify keeping the employees who refuse to do it? And again, I am as human-centered in my approach to AI as anybody. That's just a reality check. Like, you cannot as a leader, that's.

9:58

Speaker B

Not even AI, right?

11:36

Speaker A

That's anything, any technology, that they just refuse to do. So, like, back in my agency days, we would go in and do marketing automation setup, and we would advise on strategies, like go-to-market strategies. And you would have the sales reps who refuse to use the CRM. Like, everything is managed on a paper notepad or in Excel, and nobody has visibility into it. They might be a good salesperson, but over time that just doesn't work. And so I do think there's a lot of lessons learned when we look back at technology transformation and change management. But I think it's going to be more obvious than ever with AI, because the potential for it to impact workflows and productivity is so dramatic that the people who refuse to do it are not only going to be left behind, they're going to start to drag down the team of people who want to grow the company and create those career opportunities.

11:38

Speaker B

Yeah. That leads us to number three. How can leaders articulate the business value of investing in AI literacy when stakeholders aren't yet convinced it matters?

12:34

Speaker A

I'm a big believer in just making it as tangible as possible and talking to people in ways that make it relevant to them. And maybe it's the communications background, Kathy, that you and I have, but I'm not someone who feels like, I said it as the CEO, so it matters to you, go figure it out. I know that there's probably a lot of leaders who take that approach. I don't think that's the right approach, oftentimes. So what I mean by all this is, let's say you have salespeople or someone in HR or finance that just wants nothing to do with this. Show them a GPT built to help them do the thing they don't like to do. You don't like doing these reports every week? Let me show you how you can do them in 10 minutes instead of three hours. Once you show them something that's personally relevant to them, you help them then connect the dots of how they could be using it to do other things. And maybe it is just starting with the things they don't really find fulfilling in their job, and then say, you know, make me a list of the five things you wish you had time to do each month, and then help them move things from the bucket of stuff I'm wasting my time on to the things I want to be spending more time on. And once you do that, you know, it starts to matter. Or what are the KPIs that impact their own raises and performance bonuses? Show them how AI can help them achieve those KPIs more quickly, you know, surpass them. So you have to make AI personal to people, especially if they're not those early adopters who are going to race forward and try everything. There's way more people resistant to AI than there are excited about. It has been my experience in companies and so we need to do our part to try and bring those people along.

12:44

Speaker B

Definitely. Number four, what's the most effective way to help senior executives, especially in those regulated industries, understand the risk of not having AI literacy guardrails or gen AI guidelines in place?

14:27

Speaker A

Well, I mean, from a risk perspective, you could show them how it goes wrong if they make a mistake with it. But I don't know, I think show them what the alternative looks like. So, like, this morning I was working on a concept of building a GPT/Google Gem board. And what I mean by that is, I don't have a board for my company. And so the idea I had was, what if I talked to Gemini 3, the new model that I've been pretty impressed with in the first 12 hours I've had it, and said, hey, what would an ideal structure of a board be for a company of our size? Like, who would be on that board? What would be their background? And then work with it as it develops, like, okay, that's the prototypical five-person board. Now go create those personas for me. Now I'm going to train you on the personas you created, and we're going to have an on-demand board for me to talk to with those five backgrounds at any point. So if you take that example and you give that to a small or midsize business CEO who maybe doesn't have a board, or a really fast-growing startup company that maybe wants some other opinions, and you just show them this really practical way to use AI, now all of a sudden it's like, oh my gosh, okay, I understand now the risk of not knowing this. Like, my company is going to fall behind if I don't know these kinds of things. And so to find those really innovative examples or use cases, you have to understand it. And so again, it almost goes back to the previous question: you have to make it personal for them. This is what could go wrong if you don't deeply understand this yourself. This is what you're missing out on doing if you don't understand this. Here's what happens if all the people that report to you understand it and you don't. So you have to know how to talk to people and what their triggers are that's going to get them to care more, I guess, is how I think about it.

14:41

Speaker B

Okay, which kind of leads into number five. When companies start drafting responsible AI guidance, do you recommend formal policies, more flexible guidelines, or something in between? And we talked about this a little bit in Scaling AI, like, is there a name that resonates more? Is it more important to say guidelines versus policies, you know what I mean? We're trying to figure out how you enable your company and your team to do things while knowing that it's super critical that they follow the rules.

16:34

Speaker A

Yeah, I think this came up in the actual class as a policies versus guardrails question, something along those lines. And I don't honestly remember exactly what I said then, but my basic premise here is: we've seen this done many ways. Sometimes they are formal policies, because they have sign-off from the executive team and these are literally what you're required to do. Other times they can be more informal, and they are general guidelines. I think it depends on how you're using AI. So, for example, what is allowed to be uploaded into ChatGPT, Gemini or Claude, that better be a policy, because there is higher risk associated with that. It needs to be a policy, not a guideline. A guideline could be more along the lines of when do you disclose your use of generative AI, where there's probably some fuzzy middle where it's like, okay, we're allowing you to have some autonomy yourself in making these decisions, but here is the general guideline as to when you should or should not disclose. And so I do think that while it might be faster to get going with a bunch of guidelines, especially if it's, let's say, a marketing AI council and you don't have a full-blown corporate council and you're trying to just get things going within the marketing department, you may not actually have the authority to set formal policies around some of the things that would be required to be in these. But that's where IT and legal and the C-suite might need to get involved. So you may just set some basic guidelines so people are being safe and doing things in a responsible way. So part of it is what your authority is and what the charter of a council may be, or what the mission of creating the guidelines or policies is. And part of it might be, yeah, just what is the spirit of what you're trying to do.

17:08

Speaker B

Yeah. And I think, I look back on guidelines I've written: brand guidelines, social media guidelines. If someone doesn't adhere to those, it's not critical. If someone doesn't adhere to an upload policy, that's a different ball of wax. So I get that differentiation and why you should call them different things.

19:05

Speaker A

Yeah. And if you're in an industry where adherence is critical, highly regulated industries being an example there, or where you're dealing with, you know, personally identifiable information, things like that.

19:28

Speaker B

Right.

19:40

Speaker A

Then you have to have actual policies, and they're probably in the employee handbook and they're probably agreed to by everyone. Versus, you know, here's how you experiment with AI agents, but we suggest not using them to do these things because we just don't know if it's dangerous yet or not.

19:40

Speaker B

Right.

19:57

Speaker A

So yeah, it's. That's a good question though.

19:57

Speaker B

Yeah. Number six, many teams love the idea of AI, but resist assessments, training or structured onboarding because they see them as time consuming. How can leaders overcome that resistance and get teams moving?

20:00

Speaker A

My instinct is to kind of go back to the first couple of answers here with: show them the difference. So, you know, again, say, okay, you're in your current role. You are spending five hours a week doing this one workflow that has these 10 tasks in it. Once you complete this training, we expect that process to now take you 20 minutes, and you're only going to have to do these two things. And so the more tangible you make the benefit of the training, versus, yeah, we're just checking a box and I'm watching these stupid onboarding videos and I'm not really paying attention and I'm just going to get through it, to: if you do this, you are going to be more efficient and productive. And the AI-forward CEO memo, if you refer to that, it clearly states the people who do this are going to have a greater impact on the company, they're going to get paid more money, they're going to have greater opportunities. Like, connect the dots of why you're doing it. It isn't just to check it off. It is literally to help you transform your own capabilities so you can make a greater impact on the business and open up career opportunities for yourself. And part of what you do is show it. Like, here's Kathy. Kathy went through this training last year. She now saves 40% on the podcast production every week. In exchange for that time, she went and launched these two campaigns that drove a million dollars. Show them that, and don't just talk about it. And if you don't have those use cases yourself, go find them and show those. But this does have to be supported from the leadership. It can't just be words on paper. It has to be: we stand behind this. We are going to promote and advance the people who help the company become more efficient, more productive, more innovative, more creative.

20:13

Speaker B

Yeah. I mean, from an employee side of things, if you said, okay, you need to finish Piloting AI in the next three weeks, right? Kathy, find eight hours to take this. I'd be like, I don't got eight hours to do this. But if you say this needs to be done, and here's why, I think, okay, yeah, I'll find the time, and here's what's going to not get done, or here's what I need to be more efficient on, or whatever. But I think also giving deadlines makes people get through stuff. And also, and maybe this is just me as a people pleaser, but if I knew my whole team was going through this and it was important, like, if I didn't do it, I was screwing up everybody else, I'd be more inclined to do it, because I'm like, I don't want to let everyone down. We're doing this together.

21:52

Speaker A

Yeah. And it could be an integration of some cohort-based stuff, where you say, hey, we're going through this as a team and we are meeting on December 5th, and we are going to spend two hours talking about this and then doing an applied AI workshop where we take these learnings and integrate them into what we're doing. So yes, don't just make it about completion. That's why at the end of every one of the courses we created for AI Academy, I end with an applied AI experience. Because the whole premise isn't to take the course; the premise is to learn the information and apply it to what you do. So that's one way to think about it. To your point, have it be something more than just course completion. It is application of knowledge that we're going for.

22:37

Speaker B

Absolutely. Okay, number seven, some leaders want hard ROI numbers before approving a centralized AI office or a top down strategy. How should organizations respond when proof is demanded before pilots have happened?

23:15

Speaker A

I think there are very efficient ways to show hypotheticals. And I know hypotheticals maybe aren't proof, or even minimum viable examples. And so the way I have done this, and that I've trained in our courses, is, again, I'll just go back to the podcast example. If your company does a podcast, or, you know, insert any campaign workflow in there, and it takes 20 hours, benchmark it. Get the actual data of here's how long it takes us to do this thing. Then you go through and say, listen, we found that we can use Google Gemini to do these five things, and it's actually going to save us 10 out of the 20 hours. And here's what it's going to look like. We need Gemini licenses, that kind of thing. So you can make cases. And we've done this in some pretty big enterprises, this exact model: show what it looks like today and how much time and money it takes, and then show what it would look like tomorrow. We'll put this link in the show notes: I created an AI value calculator that tries to do this, where you take how much time goes into something, you put in an assumed efficiency gain, say 10%, the cost of the time that you're currently spending, and then the cost of the tech, the cost of the training, and what is the potential ROI. So you could use that tool also to kind of show an example, like a third-party tool that helps you project these things. But showing actual workflows, actual business cases of here's what it is today, here's what it could be tomorrow, is the best way to do it if you don't have the actual data. And it's really hard to refute at least getting permission for pilot tests based on that. All we need is three months and $1,000, and we think we can save $5,000 or we can generate $15,000. Like, just make a business case.
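The calculator logic described above (hours, an assumed efficiency gain, labor cost, tech and training costs, out comes a projected ROI) can be sketched in a few lines. This is a hypothetical illustration, not the actual SmarterX tool; every function name and figure here is made up:

```python
def ai_value_projection(hours_per_month, hourly_cost, efficiency_gain,
                        tool_cost_per_month, training_cost_per_month):
    """Project the monthly value of an AI pilot from the inputs
    described in the episode (all figures are assumptions you supply)."""
    hours_saved = hours_per_month * efficiency_gain
    savings = hours_saved * hourly_cost                  # labor value recovered
    costs = tool_cost_per_month + training_cost_per_month
    net_value = savings - costs
    roi = net_value / costs if costs else float("inf")   # return per dollar spent
    return {"hours_saved": hours_saved, "savings": savings,
            "net_value": net_value, "roi": roi}

# Hypothetical pilot: a 20-hour monthly workflow at $65/hour, a 50%
# efficiency gain, a $30/month license, $70/month amortized training.
print(ai_value_projection(20, 65, 0.5, 30, 70))
```

With those made-up numbers, the pilot saves 10 hours ($650 of labor) against $100 of cost, which is the kind of simple before/after case the episode suggests bringing to leadership.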

23:29

Speaker B

And we've been saying this now for, what, four years: just start with that one small thing. You know, spend a few hours doing something, see how much time you can save, see how many resources you can save, and go with that as your first use case. It's not some big departmental overhaul, but it is, look at this one little thing I can do that is meaningful. Can I do more?

25:17

Speaker A

Yep. And literally, I mean, you could do this in a spreadsheet that's just: column A is the task or the activity. Column B is current time to complete each activity, 30 minutes here, an hour there. Column C is predicted time, like, how much time would we actually save? And if you want, you can throw a column D in there of what is the cost per hour of that time. Like, is it an employee making $120,000 a year, $10,000 a month, 176 hours? You can get to a quick equation that says, okay, it costs us, I'm just gonna make it up, $65 per hour to employ this person. So here's what it would cost us before, and here's what it would cost us after, from a labor perspective, to do this thing. We all know the tasks that go into what we do. You could knock something out like this in 10 minutes.
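A minimal version of that spreadsheet, using the episode's illustrative $65/hour labor cost and some hypothetical tasks (the task names and hours are invented for the sketch):

```python
# Columns: task (A), current hours (B), predicted hours with AI (C)
tasks = [
    ("Draft weekly report",  3.0, 0.5),
    ("Summarize call notes", 1.0, 0.25),
    ("Research prospects",   2.0, 1.0),
]
HOURLY_COST = 65  # column D: the episode's made-up $65/hour

# Labor cost before and after, summed across the task list
before = sum(current for _, current, _ in tasks) * HOURLY_COST
after = sum(predicted for _, _, predicted in tasks) * HOURLY_COST

print(f"Labor cost before: ${before:.2f}")             # 6.00 h -> $390.00
print(f"Labor cost after:  ${after:.2f}")              # 1.75 h -> $113.75
print(f"Savings per cycle: ${before - after:.2f}")     # $276.25
```

The point is just how little machinery the business case needs: three columns and a multiplication get you a defensible before/after number.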

25:37

Speaker B

Right?

26:21

Speaker A

Or ask Gemini to do it for you, whatever. You can do it. You can build a business case pretty quickly.

26:22

Speaker B

Yep. Okay. Number eight. Are there organizations successfully using a single overarching KPI, like net revenue per employee to measure the impact of AI? And is a unifying KPI even realistic right now?

26:28

Speaker A

So the value calculator that I referenced, that again we'll throw the link to in the show notes, does use revenue per employee as sort of a universal metric. It is not a perfect KPI, but it is a very common KPI: how much revenue does each employee in the company generate? So I think I might have said this in the class, but in a professional services firm, like, I owned a marketing agency for 16 years, you would usually want three times the cost of the employee to be the revenue they could generate. So if an employee costs $100,000 a year in total comp, you'd want them billing at least $300,000 a year, that sort of thing. Revenue per employee numbers vary by industry. Roughly, they're probably in the quarter million to $400,000 range, depending on the kind of company. Some companies might be much higher than that, into the $500,000 to $700,000 range. But it is a pretty good metric, and it is a number I expect a lot more companies to monitor and report on, because it indicates that you're more efficiently generating revenue as a result of integrating AI. So that revenue per employee number should increase, because we should be able to be more productive and more efficient, and thereby we should be able to generate more revenue with fewer people. That's the premise. It doesn't mean you have to get rid of people, but as you grow, you shouldn't need as many people. Therefore your revenue per employee number should keep increasing.
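The premise is simple enough to write out. A quick illustration with hypothetical numbers (not figures from the episode): if AI helps revenue grow faster than headcount, the KPI rises.

```python
# Revenue per employee as a simple AI-impact KPI. Numbers are illustrative only.

def revenue_per_employee(annual_revenue, headcount):
    return annual_revenue / headcount

before = revenue_per_employee(30_000_000, 100)  # $300,000 per employee
# A year later: revenue up 20%, headcount up only 5%
after = revenue_per_employee(36_000_000, 105)   # about $342,857 per employee
print(before, round(after))
```

The caveat from the conversation applies: a company hiring ahead of its growth curve can see the number dip without anything being wrong.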

26:40

Speaker B

Unless you're just growing that much and you need more people.

28:12

Speaker A

Right? Unless you're hiring ahead of, you know, the growth curve.

28:15

Speaker B

Basically number nine, many companies are realizing their data isn't ready for AI. What's your advice for getting data, data governance and access into shape so AI can actually deliver results? And can AI actually help here?

28:19

Speaker A

Work with experts in that space. You know, bring in the people within your company or the outside advisors you need to do all of those things. The biggest key for me is: don't wait for all of that to happen to get benefits from generative AI. There are too many times where I've gone into organizations and they're like, oh, the IT team's working on this, or the data team's working on that. And so, yeah, we've got some Copilot licenses, but we're not training anybody yet, and only like 20 people actually have access to them. And they're just waiting this perpetual waiting game. And that is just not the way to do this. There are hundreds of use cases specifically for generative AI tools that don't need to touch any data. And so that's what I would advise people: just get good at identifying use cases that have a low-to-no risk profile, so there would be no objections from the legal team, the IT team, whomever, for the use cases you're pursuing. So I think that's the key for me: find the right experts who can help you figure out the hard stuff, and then get really good at the low-to-no risk use cases where generative AI can create immediate value right now, while you're waiting for the big stuff to be solved.

28:32

Speaker B

And then from the side of AI actually helping: you know, if you're using a CRM and you're trying to complete some records, there's AI, there's Breeze Intelligence, there's things like that built into some of the CRMs that we're using that could help us clean up some of our data, fix some things, populate some things from that side. So I think when you say your data's not ready, maybe it's actually more ready than you think it is.

29:43

Speaker A

Yeah. And I will add to that, Kathy: Microsoft recently announced a custom fine-tuned version of Copilot in Microsoft Excel. It uses an advanced version of OpenAI's models, but they've fine-tuned it, meaning they took the core model and trained it to function within Excel as an expert Excel user. And Google Sheets is doing the same thing with Gemini. So now Gemini is baked right in. Not the most current model that just came out yesterday, but it will be soon. And so you can talk to the assistant right within Sheets, and it's been trained to function within Sheets. So you can say, I'm trying to use this data to do this. Can you analyze the data for me and find out where the flaws might be, what are the things I'm not seeing, what are the anomalies within the data? How could I improve the data? So yeah, it is going to get to the point where you just talk to the AI assistant that's embedded right into the database software that you're using.

30:08

Speaker B

And it could help you surface like, what am I missing? What do I need to clean up first? Rather than just like you said, sitting and being like, okay, I'm just too paralyzed to get started. Let it help you figure out where you need to start.

31:10

Speaker A

Which is part of what leads to this debate about who's going to benefit most from AI tools and who might be impacted most on the job front. This bodes well for the senior people, because the value you extract from these tools comes from asking the right questions. So being able to go in and say: what am I missing here? What are the anomalies in the data? Do you see any trends from Q3 that might be hidden? Like, I noticed this, did you? If you're entry level, you don't know to ask those questions yet. You're just being trained on these things. But if you're a senior-level person, and you now know you have an expert data analyst with you on demand, 24/7, what would you ask that person? And so that's the mindset shift: to just know you have that, and what do I ask it to do? And then to know what it's capable of doing. Again, this is why literacy is the foundation of everything. You have to know what AI is capable of, you have to know what the tools you have access to are capable of, and then you've got to know what to ask them to do, and then what to do with the output, and be able to verify that the output is accurate.

31:24

Speaker B

I did that with some MAICON forecasting. Like, is this an anomaly? Is this a trend? Is what I'm seeing actually there? Are you seeing the same thing? Stripping out any customer data, just looking at numbers and weeks and things like that. And it's been really helpful.

32:29

Speaker A

Yeah, it's awesome. I've used it many times in the last like 30 days.

32:43

Speaker B

Okay, number 10, AI agents. Our favorite topic. How close are we to real enterprise adoption and what should organizations be prepared for now? What can we learn from more nimble SMBs and startups?

32:48

Speaker A

So, I know we have some community members who attend all our classes and listen to every podcast, so this might sound like a broken record to some people, but for others, I'll just reiterate some key talking points related to AI agents. One, the definition of what people call AI agents is not universally agreed upon, so there's a lot of confusion. When we talk about AI agents, the main confusion point comes in with how autonomous they are: how much the human has to be involved in the planning, the execution, and then the evaluation of what they output. That's where a lot of the confusion comes in. So what I will say at a very high level is they are marketed as being more autonomous than they are. So you may hear about these agents and think, oh wow, it's going to take my job, or I can replace three people on my team, or I can 100% increase the efficiency. That is generally not true. What is happening is AI agents are basically just AI systems that can take actions to achieve a goal you give them or a project you ask them to complete. So they can perform deep research: they can go off and look at 100 web pages, summarize the findings, and then write a report based on the findings. That's an agent. It's going and doing things. It's often helping develop the plan of things to go do. So you may say, hey, I want to research this topic in Google Deep Research. It'll go build the 10-step plan of what to go do, and then it'll go do it once you approve it. So agents are at the point where they are very helpful in some instances. They can be mostly reliable, but they are not autonomous. They cannot do entire jobs for people. We are actually probably years away from that being the case, where we're saying we don't need any writers on staff, we're just going to hire agents and the agents are going to do everything. That's not what's happening.
So the agents are able to take a specific set of actions that complete tasks that are part of a larger job. And so the human is still there. It cannot do the job of the human. But there are some things, especially like customer service or BDR work, where these agents really can start to do 50, 60, 70% of the tasks that those roles would normally do. So they're evolving. What needs to happen with them is they have to be trained to do specific work. So right now, the hot one this week is financial services. All the AI labs seem to simultaneously be working on building financial agents. So, like, OpenAI announced a partnership with Intuit yesterday, and they're basically going to try, if you think about what Intuit owns, I'm pretty sure they were in like Quicken, or there's a personal app I use, and they have QuickBooks and things like that. So imagine six months from now you have an expert-level financial analyst and advisor embedded right within QuickBooks and Quicken. That's the kind of stuff that's happening, and that becomes an agent, basically. Hey, I want to evaluate these five stocks, can you go do it for me? It comes back, gives you a report. Same thing in business: I want you to evaluate our spending over the last 12 months in our QuickBooks account, find ways we can cut costs, things maybe we're not getting full value out of. It'll go through. That's the kind of stuff that'll happen. It'll happen industry by industry, career by career, but it's going to be sort of a progressive thing over the next couple years.

33:01

Speaker B

And I'm sure that these companies building these agents are watching all the things that we're doing with these agents, in whatever form, to develop them, to make them better. You know, a few years seems like a long way away, though.

36:36

Speaker A

I mean, my current marker for this is that OpenAI has publicly stated their intention to build an AI research agent by spring of '26, and then a full-blown AI researcher, I think, by late '27. An AI researcher is probably the number one thing many of the labs are focused on building. So if they haven't yet achieved the researcher agent, and that is their number one priority, my assumption is that agents in other fields are going to probably follow behind. But I'm talking about job-replacement agents by '27 in that instance. Agents that can help a marketer or a consultant or a CEO or a BDR or a customer service rep, that literally just takes a startup building a thing to do it, and that is happening now. So you will, in '26, see a lot more advancement in these agents: more reliability, longer time horizons on the tasks they can do. So it's coming. It's just not as here as it maybe feels like, or sounds like, when you hear these companies talk about their products.

36:49

Speaker B

Yeah. Jeremy and I were talking to someone, one of our partners, earlier this week, and hearing what they're doing with some of their agents, and it was pretty remarkable. And we're trying to figure out what the human's doing and what the agent is doing, which at this point will obviously grow. But it was very fascinating to listen to what they're doing, and where the human stays in the loop, and what they're able to just set and go. It's been interesting.

38:01

Speaker A

The custom-built, fine-tuned agents are way further along than the general agent, where I just go into ChatGPT and use its agent mode to go do something for me. Yeah, so there are definitely pockets where agents are moving very quickly.

38:23

Speaker B

Okay, number 11, have you had a chance to use GPT 5.1 yet? And we can throw in the new Google as well. What stands out to you in terms of capability, reliability or safety? And what does it signal about where models are headed?

38:39

Speaker A

So I would refer to episode 180. Right, are we on 181 right now, or is this 182? Okay, so episode 180, that would have dropped on November 18th. The lead topic was GPT-5.1, so you can go hear the full conversation that Mike and I had. My experience with it has been relatively limited so far. In the first five days or so that it had been out, I used it quite a bit, but not in any use cases where I can tell a massive difference yet. Like, I haven't run it through internal benchmarks of, okay, here's what I did before, here's what it did after. So it's been very good. The basic premise that we talked about in episode 180 is that OpenAI sort of intentionally underplayed 5.1. It actually seems to be a very good fine-tuned version of the model. As I explained on the podcast, it's not a new model in the sense that they retrained a full new model. It seems like they just fine-tuned the existing model for things like writing and coding and math and science and financial advice. They're basically taking the core model and making it smarter at certain things through reinforcement training. That being said, Gemini 3, which we will talk about on the next weekly episode, which will be episode 182, coming out next Tuesday. I gave it a use case this morning that kind of blew my mind. I listened to a podcast Tuesday night, the night it came out, with Demis Hassabis, where he was talking about its visual capabilities, like to look at handwriting, or to look at a whiteboard and process things, even with super sloppy writing. I gave it a whiteboard session from an internal meeting from a few weeks ago. Honestly, it did a better job of organizing the information than I would have done. And I'm the one that wrote everything on the whiteboard, and I was struggling to understand it myself. And I was like, oh, let me see, this is what Demis said it was good at. So I literally just gave it.
I said, here's a brainstorm session. Here's what I'm trying to understand. Here was the gist of the meeting. And it's like, boom. And I was like, all right, what should we be thinking about that we didn't include in this? And it gave me this 10-point list. It was really good. That's so cool. Gemini 3, by all reports right now from people who had early access, is the state of the art now. It is, in some ways, a leap ahead of the other models that are out right now.

38:52

Speaker B

Wow. So when Claire is doing these AI Answers episodes, she's using AI to help her remove the redundancies, everything I said earlier in the episode, and she put a note in and she said: you don't have to listen to these insights, but the insights from 5.1 were distinctly different and abundantly more helpful compared to earlier models.

41:27

Speaker A

That's cool. Yeah, it's better at thinking. That's the main thing with 5.1, its reasoning capabilities. That is a frontier being pushed by all the labs, the continued advancement of reasoning, which lets it solve more complex problems, more strategic situations.

41:48

Speaker B

Yeah. Number 12: as generative AI reshapes search, what should marketers know about the shift from SEO to GEO? How should teams adapt to where search traffic is actually going?

42:09

Speaker A

GEO. I've heard AEO too. Yeah. So basically, how do we show up in the language models? That's the question people are asking. So again, if you're not in this world, if you're not in SEO or marketing, and you're maybe just a business leader, an educator, somebody listening to this: in the marketing world, for 25 years, you have been trying to show up high in search results on Google. And you do that by publishing authoritative content through blog posts and podcasts and webinars. You do all these things to get found on the Internet, and then there are other things like meta descriptions and stuff like that. So the question in the last couple years has been, well, how are these language models, these AI assistants, surfacing information as they start to integrate more links into the outputs of what they do? We have AI Mode in Google, which is becoming more and more prominent by the day and figuring into the future of what they're doing. How do we show up in those results? And the short answer is, no one's really sure yet. It's probably easier to gauge how Google's doing it than OpenAI and Perplexity and others, because they're newer to that game, in terms of how they index things and how they surface things, whereas that's been Google's life for 26 years. That's what they've done. So the general advice that we give right now is diversity of content in many places. So when I think about our own strategy: we do the podcast, it gets published on podcast networks, it goes on YouTube, there's a transcript. We do have a blog post with the transcript in the show notes, but we also enable the transcript to be other places. Just get the information out there. Go do podcast interviews. Be in all these places. Ironically, I was having this conversation with somebody last week.
PR actually may have a renaissance, because media reports, clippings, mentions, those kinds of things where you're going out and earning media, might actually be incredibly important to showing up in the AI assistants, and within agents and things like that. So, yeah, it's something to keep an eye on. There are people who are way more expert at this topic than I am. Wil Reynolds, Andy Crestodina, a couple people that have been talking about this. Chris Penn talks a lot about this. So I would say go find the people, if this is what you do and you really are trying, for 2026, to figure out: what is our plan, what are we going to do? Go look at that. Go run a deep research project, say, what are the smartest people in the SEO space saying about these things, and what are the strategies you could be looking at? It's a good one to lean on a Gemini 3 or a GPT-5.1 and see what they can help you with.

42:23

Speaker B

Yeah, the things that I'm hearing are like, go back to our content marketing roots, be helpful, be relevant, be consistent. All of those things matter right now more than ever.

45:16

Speaker A

Yeah. Be where your audience is. Don't make them come to your site; it might just be AI agents coming to your site anyway. Like, yeah, just solve for the customer. As crazy as it sounds, it's like everything that's old is kind of new again.

45:26

Speaker B

Yep. Okay, number 13: before we close out, anything else you're watching that you think organizations should keep an eye on in the next few months? I know you're going to talk about predictions at some point soon, but any thoughts?

45:39

Speaker A

And again, I don't want to end this on a negative note, but I really think that there's going to be a far greater impact on jobs than people realize in the near term. And I mean that within like three to six months. And I would just tell people to have a sense of urgency to figure this stuff out, to keep taking the next step to improve your knowledge and capabilities, to do what you can to bring along the people in your company who maybe don't want to do this or are sitting on the sidelines. I think we need, not fear, but a greater sense of urgency to figure this stuff out. The models are getting smarter. I say this all the time: they're getting smarter, and they're getting smarter fast. Just based on the Gemini 3 data from yesterday, there are no signs of a slowdown. In the US, there's a lot of chatter right now around AI regulations at the state level, and the federal government is trying to stop the state-level efforts. Those regulations could play in here. But I just really feel like there is a necessity to figure this out for our own good. There is also this incredible opportunity to frame it in a positive way, to truly just reimagine what we can do with our businesses. And I know for me, every day, I just want to find those four hours in the day to sit back and think about what's possible now. What can we do next year with events, with education, where we can just totally do something new and exciting? And that, to me, is an amazing time to be living through. We get that chance. Like the board thing. I mean, I literally thought about that this morning driving home from the gym, like, oh my God, I want to build that, like, this weekend. In my head, I'm so excited to do this thing. And I don't have to go hire developers. I don't have to find anybody technical. I literally just have to find like a half hour to write a system prompt, throw it into Gemini, and see what happens.
Like, that's amazing. And so I think we should have that mindset of all these things that we can now do, if we understand what AI is capable of and learn to look at problems differently, and at growth opportunities differently. So again, a sense of urgency on both fronts: for the good stuff, and for the stuff that might affect us negatively, so that we're at least honest with ourselves and our peers about it, and we're doing something about it, because sitting back is just not going to help anything.

45:53

Speaker B

Right. Which, you know, brings us to our free content. We have lots of stuff. You can buy lots of things from us, but we also have two awesome free classes that we do once a month: our Intro to AI, and the next one's December 3rd, and our next Scaling AI is December 12th. Those both happen at noon Eastern. You can go to our website and register for free. Send your teams. If you're one of the leaders who is trying to get your teams to understand some of this, let Paul explain it to them, and then get to work. So I would implore all of you to just come take the class. If you've done it before, come back. We'd love to see you again, and let us help you get there.

48:21

Speaker A

And if you're ready to really drive the transformation, you know, go to academy.smarterx.ai, check out all the professional certificates, get started with the Foundations collection, and then personalize your learning journey or your team's journey from there. That is a big focus of what we're doing right now: trying to make that stuff as robust and customizable and valuable as possible, so it really helps people be ready for what's coming and bring others along with them.

48:56

Speaker B

So absolutely.

49:23

Speaker A

All right, thanks, Kathy. As I said, we will be back with episode 182 on Tuesday of Thanksgiving week, and yeah, that'll be my third one of the week. Keep going, keep going. All right, thanks everyone.

49:24

Speaker B

Thanks, everyone.

49:39

Speaker A

Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

49:41