The Artificial Intelligence Show

#194: Agentic AI Timelines, Generalists vs. Specialists, Resume Tips, AI Learning Ownership, & Handling Model Updates

52 min
Jan 29, 2026
Summary

This AI Answers episode addresses 15 questions from SmartRx's Scaling AI classes and Marketing Talent AI Impact webinar, covering topics from AI education ownership and change management to agent timelines and hiring practices. Host Paul Raitzer emphasizes the importance of personalized AI training, proper change management, and the current advantage of generalists over specialists in the AI-driven workplace.

Insights
  • AI education success requires dedicated ownership with clear goals beyond just course completion - someone must own adoption metrics like GPTs built and time saved
  • Entry-level and middle management face the most AI disruption risk, as experienced professionals with domain expertise are currently most valuable when paired with AI tools
  • Cultural resistance to AI adoption can be identified when personalized training and tools still result in low utilization after 30-60 days
  • Building proprietary AI solutions often results in outdated capabilities by completion time, making dynamic commercial solutions more practical for most organizations
  • Generalists currently have advantages over specialists due to AI's unpredictable impact on specific domains, making diverse knowledge and connection-making skills more valuable
Trends
  • AI agent development progressing rapidly with industry-specific specialization rather than universal general agents
  • Shift toward compensating AI-proficient employees who create disproportionate value through bonuses and alternative compensation models
  • Growing demand for AI change management consultants as organizations move beyond pilot projects
  • Credit-based pricing models for AI tools creating unexpected cost management challenges for enterprises
  • Transparency becoming critical for maintaining brand trust when using AI-generated content
  • Liberal arts education gaining value for AI-era workforce due to emphasis on connecting disparate concepts
  • Real-time AI education and personalized learning paths becoming necessary due to rapid technology evolution
  • Build vs. buy decisions in AI becoming more complex due to rapid model advancement cycles
Companies
SmartRx
Host company providing AI education through AI Academy and hosting the podcast
Google Cloud
Sponsor of the AI Answers series and partner on AI education initiatives
OpenAI
Referenced for ChatGPT development timeline and model updates affecting enterprise deployments
Microsoft
Mentioned as example of enterprise AI provider with lengthy custom implementation timelines
HubSpot
Referenced for their early Customer Happiness Index model as inspiration for AI success metrics
Harvey
Cited as major AI agent player in legal industry worth billions specializing in attorney work
Anthropic
Referenced for AI constitution development and potential AGI advancement
McKinsey
Used as example of organization deploying tens of thousands of AI agents
Screen Dragon
Presenting partner for the AI for Agencies Summit virtual event
People
Paul Raitzer
Host and CEO of SmartRx, founder of Marketing AI Institute answering AI implementation questions
Kathy McPhillips
Co-host and Chief Marketing Officer at SmartRx discussing AI adoption strategies
Amanda Askel
Philosopher who created Anthropic's AI constitution, cited as example of generalist value in AI
Jeremy
Director of Marketing on Kathy's team, mentioned for knowledge sharing and screen recording practices
Claire
Team member who coordinates questions and created AI video content for Macon conference
Mike
Regular co-host of the weekly Artificial Intelligence Show episodes with Paul
Quotes
"The fact that you invested the energy to do the thing is actually what makes it worthwhile. It's why human art differs from AI art."
Paul Raitzer (Opening)
"I struggle right now to actually figure out what the entry level roles are at SmartRx. And I would gladly hire as many entry level people as I could. I just don't know what to have them do right now"
Paul Raitzer (Mid-episode)
"Course completion does not equal success. It's a leading indicator."
Paul Raitzer (Early episode)
"If you don't have someone who can fill that role, then I would think of it almost like an EOS consultant"
Paul Raitzer (Mid-episode)
"Sometimes the time is the point. Like the fact that you invested the energy to do the thing is actually what makes it worthwhile"
Paul Raitzer (Late episode)
Full Transcript
2 Speakers
Speaker A

The fact that you invested the energy to do the thing is actually what makes it worthwhile. It's why human art differs from AI art. Welcome to AI Answers, a special Q and A series from the Artificial Intelligence Show. I'm Paul Raitzer, Founder and CEO of SmartRx and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 194 of the Artificial Intelligence Show. I am your host, Paul Raitzer, along with my co-host today, Kathy McPhillips, our chief marketing officer at SmartRx. Hello, Kathy. Hello. If you're wondering why Kathy is with us today, it's because AI Answers is a special edition of the Artificial Intelligence Show. This is in addition to our weekly episode that Mike and I do. Every other week or so we do an AI Answers episode that is presented by Google Cloud. It is a series based on questions from our monthly Intro to AI and Scaling AI classes. So if you're new to those, every month we do a free Intro to AI class and a free Five Steps to Scaling AI class. And at each of those classes we get dozens of questions that we can't get to during the live session. So the week following, we record and then air a special edition where we answer some of those questions. So today's is following Scaling AI, the last class we did. And we are also mixing in some questions from a Marketing Talent AI Impact webinar that we hosted on January 27th. 
So we released a new report as part of our AI Industry Council and we had amazing questions from the audience. So we've mixed in some of those questions as well. So again, an AI Answers episode that is special, we do it bi-weekly, and this one is featuring specifically questions from Scaling AI and the Marketing Talent AI Impact webinar. So again, special thanks to Google Cloud for sponsoring this series as part of our AI Literacy project, where we try and make AI education as accessible as possible to as many people as possible. The Google Cloud marketing team has been an amazing partner of ours for the last year or so. In addition to sponsoring the AI Answers podcast series, they are our partner on the Intro to AI and Scaling AI classes, as well as a collection of blueprints and the Marketing AI Industry Council. You can learn more at cloud.google.com. And then a few notes, Kathy, before we dive right into these questions. Macon: you can go learn more about Macon. It's coming up October 13th to the 15th. This is our seventh annual conference. We're expecting 2,500 plus, I think I'm safe to say.

0:00

Speaker B

Kathy is our more safe to say.

3:12

Speaker A

Okay, the 2,500 plus is our goal there. But the key here is the call for speakers is open. So if you have a great story to tell, an AI transformation story, specific knowledge and capabilities that could fit into our Strategic AI track or our Applied AI track, we would love to hear from you. There is a link to submit to speak at Macon that is there now. So we are in an active call for speakers for Macon 2026, and we would love to have you submit if you've got a story to tell. We also have our AI for Agencies Summit coming up on February 12th. This is a free virtual event presented by Screen Dragon. I think we're well over 1,500 already registered for that event, and we're expecting probably 3,000. It's from noon to 5 Eastern on Thursday, February 12th. Again, that is aiforagencies.com. You can go learn more about that. And then one other free thing I'm going to mention is we have AI for Department Webinar Series Week happening. We're really excited about this. It's a new way we're trying to do this. We are launching new downloadable free blueprints that are specific to AI for marketing, AI for sales, and AI for customer success. But the way we're going to do that is we're going to host a live event each day on February 24th, 25th and 26th to give a preview of those blueprints and then to make those blueprints available. So you can register for one of those or all of those. And the URL for that, Kathy, is SmartRx AI forward slash webinars. Is that right?

3:14

Speaker B

That is correct.

4:45

Speaker A

Okay, so again, the AI for Department Webinar Series is totally free, February 24th, 25th, 26th, and then you have AI for Agencies, which is a free registration option on February 12th. So again, this is all part of this AI Literacy project. We're trying to do as much as we can to make the educational programming and events that we offer as accessible as possible to anyone. So if I missed anything, Kathy, go ahead and hit it. Otherwise, it looks like we've got 15 questions we're going to get through in the next 40 minutes, because I have a meeting.

4:45

Speaker B

Yeah, it is like AI all the time around here.

5:17

Speaker A

It really is. And my life doesn't change. I still have a million meetings and I still am the CEO of this company. So all these podcasts, we just sort of squeeze it into the hour that's open in my schedule. So I have not looked at these questions. We're going to go just like we do in the live and see if I have answers. If I don't, I'll punt it and tell you I don't know.

5:20

Speaker B

That's the best thing. You're honest about it.

5:39

Speaker A

Yeah.

5:41

Speaker B

Okay, let's jump in. So the way we do this is we take the questions from everything that we're doing. We put them through AI to help us prioritize: what we haven't answered on previous AI Answers episodes, questions from the class that we thought were really good, that our listeners would like. Claire on our team is the human that looks through all of these, and then she sends them over to me and I run them through to make sure that they flow, just a second set of human eyes to make sure that Paul hasn't answered it six times. So here we go. All right, number one: if training and education consistently block AI progress, who is actually responsible for AI learning today? Are L and D teams stepping up, or is ownership shifting by company size?

5:41

Speaker A

I don't think there's a universal answer to this. I think in larger enterprises, L and D is certainly the most likely party responsible for this. I think I might have mentioned this recently, and again, I have no idea what webinar or podcast it was on, so I'll just say it again. One of the things we see, again through our AI Academy by SmartRx, is we talk to a lot of businesses that are buying licenses for their teams or their departments or their organizations. And what we often find when we're talking to teams, so let's say a CMO reaches out and wants to get licenses for the whole marketing team, one of the first questions our business account team will ask in the discovery call is: who's going to own this? It's kind of like if we go buy 50 Copilot licenses and just give everybody one, it's not going to work. And education is the same way. You show up, you buy 50 licenses to our AI Academy, there's a wealth of stuff that can drive adoption real fast. But if someone doesn't own making sure that it's actually going to happen, then it's going to go nowhere. So if you don't have an L and D team taking the lead on this, or an HR team or whatever it might be, if you're going to commit to investing in education and training, which you should be doing, whether it's with us or someone else, someone has to own that, and not just coordinating it. I'm saying own the goals of that program. So if you have goals around number of GPTs built, time saved, new projects launched, whatever your goals are for the education program, someone has to own that and have responsibility for it. So we do not see a universal approach. We see a lot of people who don't have owners for this, honestly.

6:24

Speaker B

And course completion does not equal success.

8:05

Speaker A

It's a leading indicator. Like, you know, we look at things like that ourselves. And actually, ironically, right before this I was coming on, and so I obviously haven't talked to Kathy about it, it's literally just on my whiteboard. I was thinking about, like, back in the early days of HubSpot, when my agency that I sold was their first partner back in '07, they had something called CHI, and it was a Customer Happiness Index. And they used it to actually monitor utilization of the software as a leading indicator of happiness with the product and then, like, renewal. I have no idea if they're still using a form of CHI today, but I was thinking about, what is our CHI-like model? What is our success score within our team that tells us that someone is not only getting value from the academy, but actually is likely an indicator of true transformation happening in their company? And so I think some concept of that, which maybe once I build that next week, maybe it'd be helpful to share that with people, because I think there are leading indicators that can tell us that you're probably heading toward a successful adoption of an education and training program.

8:08

Speaker B

Yeah. And when we hear from people in our community saying that they've been successful with this, it's like, we want to hear those stories and hear how you did it. You know, Noah is hearing stories from customers every day on things that they're doing. How can we take all of that and then better be able to serve our customers saying, like, okay, thanks for buying courses, but we want to set you up for success.

9:12

Speaker A

Yeah. We are actually in the midst of getting ready to launch an AI Transformations podcast series. So another special edition that we'll do where we'll feature these stories. And then as part of AI Academy, we're working on a similar concept where we'll feature community members, mastery members, and their stories of transformation, personally and at the company level, because I think we all want to hear those stories and be inspired by them.

9:29

Speaker B

Yep, absolutely. Number two, would you recommend bringing in a dedicated change management consultant for an AI initiative? And if so, at what phase do they create the most value?

9:50

Speaker A

I would definitely consider it. I think it's much like the education and training thing. If you have someone on your team who can fill that role, that might be one of those, you know, what-are-the-new-roles-people-are-going-to-have things. And maybe that's one of them. Maybe it's an operations person that's focused on AI change management. Maybe it is someone on the HR side. I don't know. Your organization would sort of dictate where that would come from. But if you don't have those people on your staff, then I would think of it almost like an EOS consultant. Like, if you want to put the EOS system into an organization, you look around and say, do we have an integrator that can do this? And if not, then maybe you hire someone who does EOS consulting. And I know, Kathy, you talk with our community members more than I get the chance to, but there are people who are putting themselves in this sort of position to be those change management consultants who can come in and do that exact thing. So I think for a lot of organizations, that's going to be the faster path: just bring someone in who can help advise on this. And then when the right time is, is probably more a question of when your organization is ready for the change management consulting versus just running some pilot projects.

10:01

Speaker B

Or is that their consulting role is to figure out are we ready, what do we need to put in place to be ready?

11:07

Speaker A

Yeah. All I know is I've talked with a number of consultants who've been wanting to do the change management consulting for a couple years and are qualified to do it. And their clients and prospects weren't ready for that conversation yet. It's like they were offering a service that was needed, but the clients didn't really know they needed it yet. They just wanted to know: how many ChatGPT licenses should I get, and what GPTs should I build? And it's like, oh yeah, we'll do all that other stuff down the road. That change management stuff sounds great, but I don't need it right now. I just need people to actually use the tools. So it really is a readiness thing for an organization, of being honest with yourself about when you are ready for that to happen.

11:13

Speaker B

That flows nicely into number three. What role should middle management play in normalizing AI adoption without creating fear or resistance?

11:53

Speaker A

I think the job of any manager, whether it's a middle manager or above, is to help people find practical use cases that make an impact on their job every day. So I'm obviously as big a proponent as anyone that we should be using AI to drive innovation and growth and do all these amazing new things. But the reality in most organizations is they still need someone to get them that first 10 yards. They need to get to that first success of, oh, okay, so this is what a GPT does. Now I get it. Okay, can I build a couple for this? And so I think middle management's function, or any management's function, is going to be helping people get to that first success and then stack successes. So they are focused more on optimization, efficiency, and productivity early on. But they need a lot of hand-holding. There's always going to be that 10 to 20% of your team who are racing ahead and trying all the new stuff. But then there's going to be that messy middle part who kind of want to do this but don't know how. And then there's going to be the 10 to 20% who want nothing to do with this. And so I think the job of managers is probably going to be to focus your energy on that middle part, the people who want to do it but don't know how to do it.

12:03

Speaker B

So things like guidelines, you know, guardrails, enabling them, telling them to go do it. A lot of that is just like they don't know what they don't know and they can.

13:19

Speaker A

Yeah. Without saying, okay, you know, salesperson, here's five recommended use cases with sample prompts. And we actually built you a GPT. Like, we'll take an hour and let's go through this together. And then customer success team, you know, here's what we've designed for you. So I think these playbooks where you actually roll it out and you do personalized examples for people is the most fundamental way to do this successfully. And so few organizations are doing it that way.

13:32

Speaker B

Yeah. And just knowledge sharing, you know. Like, Jeremy was doing something this week and I said, okay, can you screen share with me while you're doing this next?

13:59

Speaker A

Yeah.

14:08

Speaker B

Or can you just record it so I can watch it? Because you can tell me all day long, but I do better visually, so I would like to learn how you're doing that. And that's not AI necessarily, but it does certainly apply to all of the AI tech that we're using.

14:08

Speaker A

Yeah. And Jeremy's a director of marketing on Kathy's team. Yes, Jeremy is.

14:21

Speaker B

Number four: what signals tell you an AI pilot is failing because of culture and not technology?

14:27

Speaker A

If you thought it through and personalized it to the individual or the team and they're still not doing it. So, you know, let's say you roll out ChatGPT to the marketing team or the sales team or the CS team, and in that rollout process you do a one-hour kind of workshop demo: here's what it does, and marketing team, sales team, CS, here's sample prompts we've built for you, here's GPTs you can each use. Like, you've done the hard work up front to make it easy for them to adopt. And then you monitor daily or weekly active usage and you don't see it happening. That tells you it's failing because of culture. Or maybe it's a very tight window where, okay, people were just too busy that seven-day period. But if you look over a 30-day, 60-day period and you're seeing a lack of utilization even after you've done the personalization of the technology to them, then you may have a culture issue. The other way you can do it is potentially up front: just surveying people before you launch all these pilots and getting at the sentiment in the organization, like, how excited are you about the integration of AI into your role, and things like that. Assuming people are going to be honest, you could probably very quickly realize, wow, okay, there are 40% of people who are neutral at best on the use of AI in their jobs. That's going to tell you right away you've got some other barriers to deal with, and it's not going to be as simple as giving them some education and training and some tech and assuming they're going to jump at it and start adopting it.

14:36

Speaker B

Sure. Okay. Number five: as roles are reinvented by AI, learning becomes a moving target and time for learning is finite. How should leaders think about the size, pace, type and volume of learning versus the need for rapid experimentation and deployment?

16:11

Speaker A

I think personalization again comes back as the answer. We were actually in a conversation about this a couple days ago internally with Jess, our head of learning, and some of the other people on the team, as we're trying to solve for AI for executives. So this is something I've been working on for a while and trying to envision: what does that look like within the platform, within our AI Academy? Because we get people reaching out to us saying, hey, we want to educate our CEO, but he or she isn't going to take six hours and do something. Like, I think I can get 60 minutes with them next month; what should I prioritize for them? So we intentionally design our academy to enable these personal learning journeys. And what I was thinking of on the AI executive side is, I think it has to be like 45 minutes, personalized to the individual role, like, say, CEOs. And by the way, I probably shouldn't even be divulging all this. I'm just kind of telling you the roadmap. But if I'm the CEO and I want to know this, I know I could give you an hour of my time; that's about it. And whether it's in audio format, or I'll actually go in and watch the course, or someone's going to come in and go through the course with me and we're going to talk, whatever it is, I know I've got that. And then you want this custom learning. So the CEO doesn't care about a certificate. At the C-suite level, they're not there for the professional certificates to put on LinkedIn and, like, build their resume. They just want the fundamental knowledge so they can make better decisions in their organization and guide their team. So when I think about that, it's like, okay, we have to design our curriculum to accommodate that persona or that role versus someone who's like, I want to become a change agent within my organization. I want all the knowledge, I want every certificate you all can create. 
I want to know different industries. And so you have to think differently about what the goal of the individual is, and then how do you adapt the curriculum to them over time. And so that's why I say personalization is really the only way to address this. And some people want our Gen AI apps. We drop a new one every week. And I'm sure there are people in our community, Mastery members, who are watching every one of those. And then I'm sure there's other people who aren't. What we're trying to consider is, okay, let's make sure there's a long tail of these, because maybe somebody just wants to know how to use AI within Google Sheets or Excel, and that's the one thing they're going to watch because it's super relevant to them, and they're not so interested in the latest video generation technology. And so for us, we see that as our job: to create this very diverse collection of very relevant and near real-time education so that people can adapt learning journeys. But I think individually we have to consider where people are, what their goals for learning are, and then be realistic about how it's going to fit into their schedules.

16:27

Speaker B

Sure. And even for the Gen AI app series, just to frame it around our education, but there are other examples obviously: when we first started, it was like, let's do Gen AI, different technologies every week. And now it's turned into different use cases, because the same tool across your organization can be used so many different ways. So focusing again on what problem that person, that department, is trying to solve for.

19:18

Speaker A

Yeah. And the way we've kind of guided the Gen AI apps now is we're very focused on features within apps and platforms versus the whole platform, because the Gen AI app reviews for us every Friday are supposed to be 15 to 20 minutes. You can't review ChatGPT in 20 minutes, but you can do the agent mode in ChatGPT in 20 minutes. And so that's how we think about the Gen AI apps: features within apps and platforms, so that it's quick-hitting and meets that very specific need.

19:41

Speaker B

Okay, number six, this is a really good question.

20:10

Speaker A

They've all been really good so far.

20:13

Speaker B

Yeah. We often hear that education and awareness are the biggest AI roadblocks. But our struggle is more about whether people have the right skills to use AI as a thinking partner and automation tool. What specific skills should companies prioritize when hiring, and how do you actually assess things like critical thinking?

20:14

Speaker A

This is a really good one. I feel like there are multiple questions in here, so I'm trying to parse this in my brain. So I think the awareness is often, to me, probably the leading indicator. The lack of education I think many times comes from the lack of awareness at the executive level. So if there's a lack of awareness about AI capabilities, then you don't commit to building the right education, you don't commit to internal academies, things like that. So those are very significant roadblocks. And then the skills to use it as a thinking partner, automation tool, things like that, that comes from proper education, reskilling, upskilling, where you're showing them practical ways to do this. So I think there is this mix of foundational knowledge needed, like what is a reasoning model. If I'm going to use it as a thinking partner, I almost need to understand what the reasoning capabilities of these models are. Because if I don't know that they can go through this chain of thought and they can build a plan to do something, I'm going to be probably less reliant on it as a thinking partner. So in terms of skills to actually look for and prioritize in hiring, if we're looking at critical thinking stuff, you're just doing problem solving, but where they don't have access to the AI. Like: here's a problem we're trying to solve. How would you do this? Build your action plan to solve this. Let's talk about that. Okay, now if I give you ChatGPT, how would you do this?

20:31

Speaker B

Right?

22:07

Speaker A

And let's do a real live demonstration, because what I would want to see is what their follow-up prompts and questions to the AI are. So an example I've given recently: when I'm using it as a thought partner, I'll often say, let's go through this step by step, because the AI wants to just solve your problem right away. It's like, oh yeah, here you go, here's the thousand words on the question you asked me. It's like, no, no, no, I want to do this together, and I want to understand each step you're taking, and I might take us a different direction. If I interviewed somebody who explained what I just explained, like, you're hired. Let's go. That's awesome. So I don't think enough people know that, though. But I also think we can't be overly judgmental of people that don't know, because the company they're at might not have provided them the training to know that. So then you do have to actually just zoom out and say, okay, let's think about the human capabilities here, and do they transfer to working with AI once they're given the proper tools and training?

22:08

Speaker B

Right.

23:03

Speaker A

And that is where you just have to get at: are they intrinsically motivated? Do they, you know, think well on their toes? Are they good communicators? Are they strong writers? All that stuff still matters.

23:04

Speaker B

Right? And are there hiring managers who know how to even assess for that right now?

23:16

Speaker A

That's tough. Yeah, there aren't many.

23:20

Speaker B

Okay, so Tuesday we had the Marketing Talent AI Impact webinar and report. So this is coming from that webinar specifically. Do you see AI driven talent acquisition trends extending beyond marketing? Does age or years of experience matter as much as adaptability?

23:25

Speaker A

These are really good, tough questions. So I've been pretty vocal lately that I think the people who are going to experience the most disruption are going to be entry-level and middle management. And what I mean by disruption is I think they're going to be the ones that are going to have to very aggressively pursue AI literacy and demonstrate their capabilities with AI. Because right now, the most valuable user of a chatbot is someone with years of experience and domain expertise who knows how to talk to it and ask the right questions, and knows how to continually prompt it and then assess the output of what it gives them. And that's what entry-level people and often middle management lack: that deep domain knowledge that makes them really good at working in collaboration with the AI. So I think that years of experience right now favor people who can also work with the AI. And that's why I say, when you're entry level or middle management, I think you just have to, above and beyond, prove your ability to work with these tools at the highest levels so that it's never a question. I've said before, I struggle right now to actually figure out what the entry-level roles are at SmartRx. And I would gladly hire as many entry-level people as I could. I just don't know what to have them do right now, because most of what we would historically hire entry-level people to do, the AI can do. And I don't like saying that. I'm not saying it because I don't want to hire as many humans as possible. I'll keep hiring as much as we can, but I'm also not going to hire someone that is going to come in and be obsoleted in six months.

23:43

Speaker B

Right. Okay. Number eight, should companies be doing more to protect their AI leaders and superstars, possibly even through contracts as AI capabilities continue to advance?

25:33

Speaker A

Yeah, I'd have to think about what could be included within here. But I would say there's a lot of complexity here. I don't remember if it came out in the Marketing Talent AI Impact report. I know it was at least discussed in our group settings, so I'll throw it out there. One of the council members brought up this idea. Let's just say, I'll just pick some numbers here, you've got five people at the director level who are all making $150,000 plus a year. One of them is nights and weekends building GPTs, building prompt libraries, sharing those with the team. One of those GPTs leads to a new line of business that generates a million dollars in revenue. Does that person deserve to get paid the same 150 as the other directors? Hell no. And they know that. They know the value they're creating. And so I think that's the challenge: when you have these people who take these leaps and start really driving innovation and growth through their work with AI, they're probably happy as hell that they're getting the opportunity to do this. They're probably feeling more fulfilled in their job, but they're also going to look around and be like, wow, I'm creating a disproportionate amount of value in this company right now. So that came up in our council meetings: how do you compensate that person? And I think that council member in particular was treating it more as one-time bonuses. So it's like, we can't just change our pay scale. You're still a director, but if you create something that creates disproportionate value, you're going to get a one-time bonus of $50,000. Things like that. So I do think we need to really start contemplating that when people show true ability to make a difference through their own investment of time and energy to master AI use in the company, they have to be compensated for that.
However you do that, stock options, bonus plan, whatever it is. And again, I don't know very many companies that have solved for that yet. They're just starting to ask that question.

25:46

Speaker B

Right. Okay. Number nine, I manage a small team of media buyers and we're not allowed to use AI tools with confidential data. How can teams like mine still get real AI experience?

27:54

Speaker A

I would try and find ways to anonymize the data so it's not confidential, so you don't run into that issue. That would be my first reaction. Is there a way to get the proprietary information out of it where you're just trying to inform the media buy? The second option would be, if that is core to your business, like if you know 50% of your revenue comes from media buying, then I think it would be 100% worthwhile to talk to someone internally in IT, or to an outside advisor or consultant, who could build you a proprietary tool on open source technology, or something that lives within your walls and isn't going to these model companies, to where the team is confident you can now use the technology without concern of leakage of proprietary information. So yeah, if it's core to your business, I wouldn't stop at "no, you're not allowed to." It's like, okay, well, what's plan B? How can we make this safe to do so that we're able to do it? And one way to prove that, if you need to make the business case, is to use dummy data. Go into ChatGPT or Gemini and say: I want to prove a business case for building a media buying tool. My team is concerned about confidential information. Here is a template of what a database looks like, what a spreadsheet looks like, the kind of information I would give you. Can you fill this out with dummy data that we can use to build a sample project, so I can go to my executive team and say, look how much time I could have saved, look at the insights we could have had. This is all dummy data. It's safe. So I think you've got to think about how to make that business case. I wouldn't give up, though. If it's core to your business, it's worth the time for sure.

28:08

Speaker B

And I was in planning and buying like six careers ago and it was like, I mean, I'm thinking about competitive research, sentiment analysis, just deep research, learning more about your customers. I mean, there are so many things you could be doing that don't involve data at all.

29:51

Speaker A

Yeah. And it just for, if somebody's maybe not in this industry, media buying literally just means like someone who's like, okay, we got a million dollars to spend to promote our product, our brand. Where should we spend it? Online channels, social media, Google Ads? Like that's what a media buyer does. They figure out how to allocate that spend for the, you know, the best cost per thousand, you know, best impact.

30:07

Speaker B

That's what I used to say. And I was like 24. I was like, I spend other people's money for them.

30:27

Speaker A

True.

30:33

Speaker B

Okay, number 10. If we zoom out, ChatGPT moved from 3.5 to 5.2 in about three years. If AI agents are at a ChatGPT 1 stage today, how fast do you think this could advance? Could you realistically see a GPT-4 moment for agents within a year?

30:34

Speaker A

Yes. I wouldn't say they're at GPT-1. I would say they're probably at ChatGPT 3, maybe, at this point. This is going to be uneven. The way I've explained this is agents are being built to be more autonomous and more reliable by industry and role. So it's not going to be like we flip a switch and GPT-6 is all of a sudden a universal agent that works the same for every profession. Right now it's AI researcher: they're all trying to build AI agents that do the job of an AI researcher, because it's the most compounding-value thing they can create. Once they automate AI research, they can massively scale up their own labs. But what also then happens is you pick verticals, and venture capital firms fund the building of agents in specific verticals. A popular one right now would be the legal industry. Harvey is the major player; they're worth billions of dollars. They're basically taking the core model and doing their own fine tuning and training to make it specialized in the legal industry, to do the job of an attorney or an associate within a law firm. The same thing could happen in any industry. People are doing it for marketing, sales, operations; you can go into healthcare, all these things. So I think it'll take time before we look around and say, wow, AI agents, regardless of industry, are just at that moment where they're better than humans, and I'm going to start hiring agents instead of people. I don't think we flip a switch in the middle of this year and these universal general agents just exist. The other thing I've said recently is, even if they did, let's just say somehow someone reaches AGI later this year, Anthropic, Google, whomever, and they have built a universal agent that can do everybody's work better than the average human. Look at how long adoption curves are taking for just basic chat experiences, just text in, text out, which we've had for over three years now.
And most organizations are still at the piloting phase at best. I think if we had a universal AGI agent this year, it could be 2028, 2029 before most organizations even figure out what the hell to do with it. So it's going to be uneven in terms of when it's available to different industries, and it's going to be extremely uneven in terms of when it's actually adopted and changes industries and companies.

30:50

Speaker B

Okay, number 11, how are organizations managing agent quality and decisions around LLM selection amid constant change? I keep thinking about McKinsey rolling out tens of thousands of agents and then OpenAI changing default models. How do teams avoid chaos?

33:20

Speaker A

I don't have a good answer for this, nor do I think the IT departments at these major companies have a good answer for this. This is one of those things where, back in 2022, I sort of looked out ahead and tried to project what was going to happen. And I had companies calling me and saying, hey, should we spend the $3 million and build a proprietary internal LLM and things like that, or customize an LLM based on GPT-3 or whatever? And it was so hard to provide guidance then. It's hard to provide guidance now. One of the things you have to be conscious of is that whatever is available today, you may stop and spend tens of thousands, hundreds of thousands, millions of dollars building a custom, proprietary version of it internally, and it's outdated by the time it's built. I can't tell you how many meetings I've been in where people are demoing for me the proprietary things that they built. And I'm like, oh, does it do this? Does it do this? Does it do this? They're like, no, no, no, we don't have any of those capabilities. It's based on a 2.5 model. I'm like, what? All your employees are just going to go and use ChatGPT anyway. You've neutered the thing. There's nothing left in it to do all the things they know are possible. So I empathize with anyone who has to make these decisions. It's super hard. And then you deal with the other issue. I don't know if we have a question about this, but I brought up the issue of credit-based pricing on this week's podcast. And I can tell you I heard from some people for whom it's a worse problem than I thought. I was looking at it as SmarterX, like, we're a smaller company, and, oh, this is crazy, all these unexpected costs, and we had to shut off our AI tool for five days. I've heard from people who have it way worse than us at big companies. And so you start dealing with that.
It's like, even if you don't build your own thing, what if you're using the API, or you're using a company that's doing credit-based pricing on access to their agents and things? It's such a dynamic space right now. I don't think anyone has a great answer for this. And like I said, I feel for the IT teams, and then the department leads who are trying to maneuver through all of this.

33:37

Speaker B

But you can't wait, you can't.

35:48

Speaker A

And I mean, we advised one company that had a massive play with Microsoft; Microsoft was their main provider. And so the IT team was working on this massive build, millions of dollars, and by the time it was going to be built, it wasn't going to do half of what we knew they could get from a ChatGPT Team license. It's like, let's just go get you 20 licenses for ChatGPT. You could be up and running tomorrow. We can build GPTs for everybody on your team by next month. And so that's what we did. The IT team's doing their thing, taking forever, going through all this risk management, which they need to do. Microsoft's building all this custom stuff, spending millions of dollars, and IT doesn't ship anything. And here our team is, a month later, running circles around everybody in the company, because they just went and did it. It's hard to advise against that approach. It's why I get a lot of pushback from some of my friends who are big on, you've got to build a proprietary thing, you can't risk data leakage into these model companies. I get it, but the opportunity cost of waiting, and then building something that's obsoleted six months after you spent the $3 million? I'm sorry, you'd better make a really, really strong business case to me to not go with the more dynamic route, because I can spin it up for 20 bucks a month per user when I know for a fact I can generate a 10 to 20x return on that 20 bucks a month at minimum. I'm sorry, as a CEO, you're probably not winning that argument.

35:51

Speaker B

Yeah. Okay, number 12, generalists who can connect the dots are highly valued right now. How long do you think that advantage lasts and when does the pendulum swing back toward deep specialization?

37:22

Speaker A

Yeah, I think I answered a variation of this question on the marketing talent one. And the guidance I roughly gave is, I'm a big fan of generalists right now, for sure. I think I used the analogy that if my kids were heading into college, I would strongly encourage a liberal arts college, because I think diversity of knowledge and experience is going to be very, very important. We talked this week on the podcast about the AI constitution that Anthropic has built. The woman who largely built that, Amanda Askell, is a philosopher. She created one of the most important documents in AI today, and in retrospect it could end up being one of the most important documents in AI ever. And she's a philosopher. She's not an AI researcher. She's not a specialist in that area. She focuses on ethics and philosophy, and that becomes super important. And so I just feel like we don't know what the future looks like. We don't know what remains uniquely human. And so it's really hard to specialize in one area of study or one specialized career path that the AI might be better at in 12 months. So I don't know. I mean, if you go back three years, I would have said computer science. Just become a programmer, become an AI researcher. And now Anthropic doesn't seem too hot on needing any of those roles in three years beyond the people they've already hired. I don't know that they're going to be hiring those people. So I don't know. I think generalist is more stable to predict, but it could swing at any moment, I guess. But I think the specialist thing is going to have to be very specific. I'd have to think deeply about what specialist roles would continue to carry enormous value when the AI is, we have to assume, capable of being at or above expert human level at everything.

37:34

Speaker B

Yeah. When I was listening to the podcast and you were talking about Amanda, it made me think of, this was 2022, I think, at MAICON. One of our speakers worked for an AI tech company, and she was on a panel with Mike, and she's a trained linguist who was working for this AI company. I'm like, that's really valuable. So, I mean, there are certainly jobs and niches and things like that that really can work well for you. But again, that's a liberal arts degree, right?

39:40

Speaker A

Yeah. And again, I think you just have to be able to make connections between seemingly unconnected things. It's something I always look for in the hiring process: who makes the connections? They read a book about philosophy, or they read a book about, I don't know, psychology, and they connect the dots of, oh, so our customer journey should. Actually, we didn't factor this in. We were thinking about the customer journey and, like, the needs of the individual. And so I've always been a huge advocate of hiring people out of liberal arts schools, because I wanted that diversity of knowledge. And you just never knew when something you learned in economics class was all of a sudden going to become super relevant to what we were doing in marketing. So, yeah, I'm a big fan of generalists.

40:09

Speaker B

Should have done better in economics before.

40:52

Speaker A

Oddly, I hated math, but the two classes I excelled at in college were statistics and economics. And I avoided math like the plague in college.

40:54

Speaker B

But stats, I love math. I love economics. I would love to take it now to see, because at that point I wasn't interested. Okay, number 13. Beyond listing courses, how can professionals show they're AI-forward on resumes or LinkedIn, especially when it's hard to demonstrate things like creativity, empathy, and judgment?

41:05

Speaker A

Well, build something. If someone came in for an interview and said, hey, I've built these five GPTs for my personal life. You and I, Kathy, were just talking this morning about a trip planner. Both of us have planned trips in the last week, and we both used Gemini to do it. So it's like, oh, I built a GPT for that. I built one for my diet and exercise. I've got a GPT for helping my kids in school. It's like, okay, cool. You're problem solving. You're using AI in an interesting way. And then, here's one I built to help me with my job. The other thing that's becoming super accessible is building apps. I've talked recently about Lovable and how I've been kind of experimenting with that to build some things. There are no barriers to building anything. And so, to me, being able to demonstrate in an interview something you built, and to talk about why you built it, what problem you solved with it, how it helped you become more efficient, make better decisions, drive innovation in your personal life, your business life, that is the gold standard to me right now. I love to see that you took courses and read books and have certificates, and that's all important, but the fact that you built something, especially if you didn't have to, if it wasn't required of you and you did it anyway? It's good stuff.

41:26

Speaker B

But I think with the empathy and judgment, you know, all those bullets that were on your resume or your CV before, you know, managing a team, doing X, Y, and Z, those certainly matter.

42:37

Speaker A

Yeah. And empathy, I don't know. I've honestly never thought about how you demonstrate empathy. Do you interview for that? I'm sure there's thousands of people way more qualified to answer a question about how you interview for empathy than I am. I'm just thinking out loud. But back in my agency days, we used to have this car ride test. We did a lot of car trips. We'd have to go visit clients, like, six hours away, and you have to spend six hours each way with people in the car. And so we literally used to have this: would you want to take the road trip with them? The road trip test. And part of that was just, is it a fit culturally? Are they a good person? A caring person that you want to be around and talk to? I'm not asking specific questions to get at empathy. I'm just trying to figure out, is this a good person? And that became one of the big filters that Tracy and I would have. Do we want to make this hire? It's like, yeah, I'd love to go on a road trip with them. They're fascinating. They're good people, and they have interesting things to talk about. But I think empathy.

42:48

Speaker B

That was my second day at work. We went to Athens, right?

43:49

Speaker A

That's true.

43:51

Speaker B

Was that all planned?

43:52

Speaker A

We did, yeah. You passed that test. We talked about AI Academy that whole ride on the way there, didn't we? And the six hours back. I don't know. Maybe it's, what are they involved in? Are they on boards of nonprofits? Do they get involved in volunteer activities? There's ways to get a sense of, is this an empathetic person who cares about others, not just themselves? But I also feel like so often you can just tell. If you're a good judge of character, you know within two minutes, is this a good person?

43:53

Speaker B

Right. Okay. Number 14. How can brands retain trust and authenticity when using AI generated video, audio, images, or copy?

44:23

Speaker A

I think you just have to be transparent about everything you're doing. The authenticity thing is critical. I've talked a lot about that on recent episodes. I think it came up in the marketing talent one. And by the way, we keep mentioning this Marketing Talent AI Impact study. You can download it; we'll put the link in the show notes. It is available now. It's a great read, one of my favorite pieces we've ever done. Yeah, I think if there's an expectation of authenticity, you have to show up as a human. I don't know how else to say this, and I use this example for our own stuff with AI Academy. The question will obviously come up at some point, or already has: hey, would we ever use AI avatars to create our stuff? And my answer is, absolutely not. They are coming to our academy for us, and it needs to be us. Yes, we could save two hours if I created an AI avatar in HeyGen or Descript and had it read a script that ChatGPT wrote. But if you found that out as a customer, you would feel like it was just cheapened. Or if you found out that this was my avatar sitting here talking to you and saying all this stuff unscripted, it would just lose something completely. So I think the litmus test is: is there an expectation of authenticity, that you wrote the thing, that you're presenting the thing? Then you've got to show up in an authentic way, even if it means you're spending more time and the AI could save you time. Sometimes the time is the point. The fact that you invested the energy to do the thing is actually what makes it worthwhile. It's why human art differs from AI art. I'm not saying AI art isn't creative. If you see AI art and it inspires you in some way and you find it to be creative, buy it.
But if you want to know that the thing someone created, and I say this from a personal place because my wife and my daughter are artists, that they spent three months doing the thing, and this thing that happened in their life is what led them to make the thing, I'm buying that over the AI thing 10 times out of 10, because it's authentic and I wanted it to be that way. I wanted a human condition behind it. That doesn't mean I won't see AI stuff that I think is awesome and that I share. So I think that's the biggest litmus test: if authenticity is expected, then you've got to show up as the human.

44:33

Speaker B

So when Macy, who runs our community and social, when she started, we were talking about AI usage and where it made sense, where she was allowed, permitted, you know, based on our guidelines internally and everything. And it's like, there are some places I expect her to be using AI.

46:56

Speaker A

Yeah.

47:10

Speaker B

You know, taking a blog post and creating social. She doesn't need to write every single word of those things. What she can't do is have an AI bot running in our Slack community or things like that. All the time she's saving doing this is helping her be more human and authentic in other places that are so much more important.

47:13

Speaker A

Yeah. Another one of my favorite examples is Claire on our team, when she created, it was like a minute and a half video for MAICON 2025. And it was to build on this Move 37 moment that I told the story of in my opening keynote. And it was all AI, but it was also all Claire. She's the only person on our team who could have done it, because she's a storyteller by trade. She has a background in video and graphics. And it was amazing, and it was very powerful. But you also knew it was AI. We did not hide that at all. We actually featured the fact that it was AI, and yet there's a human storyteller behind it. I thought it was this perfect blend of AI and human, where it was still authentic to me, because I knew what she put into it to make it come to life. And it wasn't just a prompt and, here's a minute and a half video. No, it was hours of her time, and her whole life of being able to go in and do the thing that you and I couldn't have teamed up and done, Kathy.

47:30

Speaker B

Right? And she had those artist moments of like throwing her hands up, walking away like, this isn't right. I gotta go back and fix it. So it was a whole process for her. Okay, our last question, number 15. If a company is considering a third party AI powered SDR solution, what questions should they ask vendors and what would it take to build this capability in house instead?

48:24

Speaker A

So this is actually a great one. I'm not taking the easy way out: I would put that question into ChatGPT or Google Gemini or Anthropic's Claude and just ask it. I mean, when I have to do things like this, I've got to talk to my attorneys, I've got to talk to my accountant, I've got to talk to outside vendors, I'll go and say, hey, what questions should I ask this person? So one, I will say it's a great one to throw in there as a prompt and see what it recommends to you. But I think for me, the big key is you're going to have to define a workflow. You're going to have to have trust in the solution. You're going to have to know what guardrails are in place to make sure this thing doesn't go off the rails and destroy the brand equity and brand trust you have with people in your industry, or your prospective buyers, things like that. So I would say you have to have the same level of confidence in this solution as though you were hiring a human to do the same thing. What would you need to know about that person? What processes would you have to put in place? What guardrails would you want? What level of oversight would you want? When is it okay for it to make a decision and do a thing, versus needing a human in the loop for approval to do a thing? So that's probably the simplest way I would look at it: create this process as though you were hiring an SDR, and then translate that over to hiring an AI instead, and what other limitations it might have or guardrails it might need. In terms of how to build the capability in house, I don't know. I'm actively trying to build this capability for us, because we do not have SDRs. And my instinct going into it, my hypothesis, is that I think I can automate 90% plus of what an SDR would do for us, with heavy human in the loop for the last 10%, kind of that last mile. I think that most of the work, the admin-level work of the SDR, can likely be done by AI.
And I don't think it'd be overly complex for us to build it ourselves. That being said, I don't know. It's all more of a hypothesis at this point.

48:46

Speaker B

Sure.

50:48

Speaker A

Probably depends on what existing tech you have access to, but there are great solutions out there for this stuff. I would obviously talk to your CRM company as a starting point. And then certainly there's third party technology you could layer in pretty quickly while you're building your own capability, if you wanted to.

50:49

Speaker B

Amazing. All right, that's our 15 questions.

51:05

Speaker A

Three minutes to spare for my meeting.

51:07

Speaker B

Well, we will throw a ton of links in the show notes, everything from the talent report we put out, the Blueprint Series, webinar links, AI for Agencies, MAICON. If you are interested in speaking, if you're interested in attending, we would love to have you at all of the things. And Mike and Paul will be back next Tuesday for the regularly scheduled show.

51:09

Speaker A

Yeah, thanks for the great questions, everyone, and thank you, Kathy and Claire, for coordinating everything. We will talk to you next week. Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

51:30