The Artificial Intelligence Show

#150: AI Answers - AI Roadmaps, Which Tools to Use, Making the Case for AI, Training, and Building GPTs

67 min
May 29, 2025
Summary

This episode is the first in a new AI Answers series where Paul Roetzer and Cathy McPhillips address questions from their live virtual events and classes. They cover 19 curated questions spanning AI mindsets, tools, team building, future-proofing, and strategic implementation for businesses navigating AI adoption.

Insights
  • AI literacy must be the foundation before any enterprise AI scaling initiative, requiring comprehensive change management rather than just technical implementation
  • The dominant AI platforms will likely consolidate to Microsoft/OpenAI and Google, with most enterprise experiences delivered through existing tech stack integrations like Salesforce and HubSpot
  • Companies should focus on mastering one or two core AI tools rather than spreading across multiple platforms, as the leading chatbots now handle 95% of most knowledge workers' needs
  • AI roadmaps should be dynamic 1-2 year frameworks that are re-evaluated quarterly, as new frontier models emerge every 3-6 months with paradigm-shifting capabilities
  • The next generation of workers will need AI skills as fundamental as typing, making AI familiarity a basic requirement for most professional roles within the next year
Trends
  • AI councils becoming standard in enterprises, with C-suite executive sponsorship required
  • Shift from multiple specialized AI tools to consolidated platforms with custom GPTs/agents
  • AI literacy training becoming mandatory for all employees, not just technical teams
  • Job displacement acceleration requiring proactive reskilling initiatives
  • AI policies requiring quarterly updates due to rapid capability evolution
  • Educational institutions implementing AI-plus degree programs layering AI over traditional majors
  • Search marketing facing dramatic transformation within 18-24 months
  • Open source vs. closed AI models creating enterprise security considerations
  • Computer use agents introducing new policy and security complexities
  • Voice interfaces potentially disrupting traditional search advertising models
Quotes
"I feel like we finally had arrived at a point where companies and more largely society is just understanding the moment we find ourselves in and the significance of what's happening. And so there's far more urgency from people to play a role in this, to have some agency in what happens next."
Paul Roetzer
"If you're a company that's growing 10% or less per year, there is no way you need as many people in that business in 18 months that you do today. If you do, you're not running the company very well."
Paul Roetzer
"You have to teach them to talk to it as an advisor, tutor, mentor, and not as a cheating aid."
Paul Roetzer
"Until now, it's been okay to work for a company that maybe didn't give you access to ChatGPT or Gemini or didn't train you on these things. But moving forward, you're going to start to put yourself behind your peers if you're at an organization where you cannot use AI in your job."
Paul Roetzer
"I think that learning to work with these tools, learning to prompt, those are just going to become very fundamental to every job right now."
Paul Roetzer
Full Transcript
2 Speakers
Speaker A

I feel like we finally had arrived at a point where companies and more largely society is just understanding the moment we find ourselves in and the significance of what's happening. And so there's far more urgency from people to play a role in this, to have some agency in what happens next. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 150 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with a special co-host today. If you're a regular listener to the Artificial Intelligence Show, I would usually introduce Mike Kaput at this time. But this is the first episode in a new series we are doing called AI Answers. So Mike will be back every Tuesday. Don't worry, the weekly format isn't changing. This is a new series, as I said, called AI Answers. And the premise here is every month Kathy and I do two free classes together. We do Intro to AI, which we started in fall of 2021. Kathy, if I'm not mistaken, we have had over 32,000 people register for that class. We have done 40... how many, Kathy?

0:00

Speaker B

49.

1:59

Speaker A

Nine. I think we're on 49. That sounds about right. And then we also do a Scaling AI class every month for free. And these are both run through Zoom webinars. You can register for either of them, both of them, whatever you'd like. We have people who come every time. It's kind of wild to us. But those two classes are a key part of our AI literacy project. So we're constantly trying to find ways to accelerate AI literacy for as many people across as many career paths and industries as possible. And so for the intro class, the next one is June 10th, so we'll put links in the show notes and you can go and check these out. For that one, the whole idea is to try and provide, like, this very fundamental understanding of AI very quickly. So, like, 30 minutes is the class I present for, and then Kathy and I do Q&A for 30 minutes. We normally will get, I don't know, 1,200 to 1,500 people registered each class. And we will get dozens of questions, sometimes over 100 questions. And they're all amazing. And we usually get to, like, five to seven of them. And then the same thing happens with Scaling AI each month. That one, we get maybe 500 to 800 people each month that come to that class or register for that class. And then again, dozens of great questions. Now, Scaling AI tends to be more for, like, director level and above. It's definitely more of a strategic approach. It's, like, five steps to scaling AI in a company. But again, dozens of questions, and we get to maybe five to seven of those questions in the time allotted. So we had this idea to take those two classes as well as our virtual events. So we have, like, our AI for B2B Marketers Summit coming up June 5th. We will get hundreds of questions because that one's going to have thousands of people registered for it. And so we have all of these questions, which, one, we want to be able to answer as many as possible, thus the idea for this series. 
But two, what we realized is each of these live events is a window into what people are thinking about: the challenges that they're having with AI adoption, the pain points they have, the strategic questions they have. And so by doing these AI Answers episodes, we're planning on about two a month. So the idea is we do Intro to AI, and then the next week we answer questions from that one. Then we do Scaling AI, and the next week we answer questions from that one. So this one today, episode 150, is based on the May 15th Scaling AI class. We have taken questions from that course and we've curated them. I have not actually looked at them. I did this the way we do it live, which is Kathy picks the questions and asks them to me. And so that's kind of how we're going to roll here; this is going to be, like, real time. Kathy asks me things and we do our best to answer as many as we can. So Kathy has curated, I don't know, about 20, 25 questions. Does that sound about right, Kathy?

2:00

Speaker B

I boiled it down to 19.

4:56

Speaker A

Okay, so we're down to 19. We'll see if that expands or contracts as we go. But I'm going to do my best in like, maybe, you know, one to two minutes, try and be as high level as I can and just get through as many of these as possible. So again, that's the whole premise of the series. Our weekly episodes aren't going anywhere. Mike and I will continue to be with you every Tuesday doing the weekly. But this gives some additional content for everyone and hopefully just a ton more value because for Kathy and I, we both say this all the time. Our favorite part of doing those classes each month is hearing the questions that people have because it does just truly give us a snapshot into where we are overall, just in terms of understanding and adoption of AI. And to watch those questions evolve over time is so fascinating. Like, Kathy, you see more than I do. But like, I mean, you start getting questions around like the environmental impact now and geopolitical stuff. And like six months ago that wasn't even on people's minds and now it just is like it's totally evolving. So.

4:57

Speaker B

And it depends on, you know, the audience mix that day. Yeah, it depends on what happened in the podcast that week and what people, you know... So there's so many variables that come into play when the questions come out. It's like, that's such a great question. And sometimes we use them in our daily Slack, like a question of the day, just to get other answers from other community members, but there's still dozens that don't get answered. So this is an exciting, exciting way for us to connect with our community and say, hey, we answered your question on the podcast.

5:54

Speaker A

Yeah, that's true. We'll do our best to, like, let people know. But, you know, these will be available; if you're a regular listener, this is on the podcast networks, Apple Podcasts, Spotify, and we also have these on YouTube. So we publish all of our podcast episodes on YouTube. Claire and our team do a great job of cutting up clips so you don't have to watch the whole thing necessarily. She puts them into Shorts and breaks them off into separate videos. So yeah, we try to do this multimedia, and wherever you're at, we want to kind of have the content meet you where you are. So, yeah, always go back and check them out. And again, if this is your first time listening to the podcast, join us for the weekly every Tuesday. We've been doing that since October-ish of 2022. We launched the weekly right before ChatGPT. So we've done 140-some episodes, probably, of that. So we do this all the time, and we appreciate everybody who's joining us for the first time, and all of our loyal listeners who are back. So, yeah, with that, Kathy, unless you got something else to add up front... I would say I was just going.

6:20

Speaker B

To say that Claire, you know, this was Claire's great idea to do this.

7:23

Speaker A

Yep.

7:25

Speaker B

So she, after Intro, after Scaling last week, she's like, let's use these questions. So she built up this idea on how to do this, how to format it. And then the other day, I took all the questions, through Claire's formula, put them into AI and said, okay, help me prioritize these. Help me do these in order so it makes sense for the listener, so it's, like, segueing from one thing to the next. And then I went through, as we always do, and made sure that it was actually correct, you know, and that it did flow, and did a little bit of cleanup. And I feel guilty taking episode 150, a milestone, away from Mike Kaput, but I'm happy to be here.

7:26

Speaker A

Well, that's funny. Like we had said, I don't know, a few episodes back, we were like, oh, we should probably do something, like, special for episode 150. And then it just works out that this idea sort of came up and it's like, all right, let's do it now. And, oh, that would be episode 150. It's like, all right, great. Launching a new series is, I guess, a good way to spend episode 150.

7:58

Speaker B

Mike gets 200 for sure.

8:15

Speaker A

There you go.

8:17

Speaker B

Okay, so I broke this up into five different sections: mindsets, tools, team building, future-proofing, and then where this is all headed. So that's kind of the block of questions I was thinking about. So let's jump in. Cool. Starting with mindset. You know, as much as we talk about capabilities, fear and misunderstanding are, like, huge blockers in this. So how do you explain AI as a tool for transformation, not just another shiny tech, to someone who's unfamiliar with it or even a little bit afraid or wary of this?

8:18

Speaker A

Yeah, this gets into that bigger idea of change management that we talk about so often, and not allowing AI to fall into the tech and data silos of the company. You know, really understanding the overall impact this can have, not only at the organizational level, but down to the individuals and their teams and the departments overall. So, you know, it really is just taking a strategic approach. That's part of the thought process behind our Scaling AI class; it's, like, teaching five steps. And so, you know, going through and building an AI council, developing generative AI policies, responsible AI principles, doing impact assessments, building a roadmap, building an internal AI academy. So to truly approach it from a transformational standpoint, you have to have a framework for how to think about that transformation moving forward. And so that would be, to me, like, the first step: set some very tangible elements to the plan and then a realistic timeline to get started.

8:49

Speaker B

That's one of the things I love about Katie Robbert's session at the AI for B2B Marketers Summit coming up. It's all about, okay, you can do all of these things, but if there's not someone in that spot, some change agent or whoever in the organization, it's not going to go anywhere. So I'm really excited for her session, to hear what she has to say about this.

9:48

Speaker A

Yeah. Someone to own it. And then, like, if it's not an executive that's owning it, that you need an executive sponsor that's going to give them the kind of support and resources they need to see that through.

10:05

Speaker B

Yeah. And sometimes a third party is the one that kind of cracks that nut when people are, you know, not really listening, you know, getting that outside perspective to come in and do that.

10:14

Speaker A

Yeah, someone to come in. And I understand, I get the frustration, especially for people that work at big brands. It's like, you could be saying this from the rooftops for six months to, you know, a year or more, and then you get the consultant to come in and say the exact same thing, and all of a sudden everything starts moving. So I empathize with that. But, you know, sometimes you just got to do whatever you got to do to get the ball rolling.

10:23

Speaker B

All right, here's another one that stuck with me. Do you see learning to use AI effectively as the modern version of learning to type? Like, are we headed toward a world where not having AI skills leaves you behind?

10:45

Speaker A

I really like this one. It's super practical in terms of, like, some advice I've been giving of late. Yeah, I mean, I think on resumes, when you're doing interviews, it really is where you're starting to say, okay, what's your familiarity with using, like, you know, Microsoft Word or project management tools? Like, it's just these fundamental skills that we require of any professional in any career. And I do think that learning to work with these tools, learning to prompt, those are just going to become very fundamental to every job right now. I think the challenge is a lot of, maybe, the hiring professionals or the executives that guide, you know, the kind of people they want to bring in, they don't know to build this into what they're looking for in job descriptions and skill sets. But I do think that within the next year, across most industries, familiarity with these chatbots, you know, your prompting ability, those things are going to matter and just be required for any job.

10:56

Speaker B

And conversely, you've even said, you know, if the company that you're interviewing with doesn't ask those sorts of questions, you might want to reconsider actually sticking with that.

11:55

Speaker A

Yeah. And again, I didn't look at any of these questions, so I don't know if we're going to get to any of this stuff in advance. But that is one of the things we say a lot, which is, you know, up until now, it's been okay to work for a company that maybe didn't give you access to ChatGPT or Gemini or didn't train you on these things. But moving forward, you're going to start to put yourself behind your peers if you're at an organization where you cannot use AI in your job. And so I do think not only is it important that we look for these skills as employers, but as individuals who have these skills or are developing these skills, you're going to want to work for organizations that allow you to apply these skills and thrive, and not spend the next few years not learning these things. I mean, the best, you know, kind of closest analogy we've talked about is, like, going back to the early 2000s and not allowing people Internet access to do their jobs. You would fall behind if you didn't know how to use the Internet, for sure.

12:03

Speaker B

Okay, if AI is moving and evolving as fast as it feels, and it feels fast, changing itself and everything around it, how realistic is it to create an actual AI roadmap? Or are companies just stuck in reaction mode?

13:05

Speaker A

So I actually think it's moving faster than most people realize. There are weeks, like last week, you know, we talked about this on episode 149 of the podcast, where I'm not sure that I fully grasp how fast things changed last week and, like, what that means for the next 12 to 18 months. So I know it feels fast to everyone, but trust me, it's actually probably even faster than we're aware of at the moment. So in terms of an AI roadmap, my basic premise there, and that's what I teach in one of the courses as part of our AI Academy, is I think of roadmaps as, like, one to two years, where you're starting to lay out priority projects and priority problems to solve. But the core of the roadmap is having a vision for the type of organization or department you want to build, this AI-forward mentality where everything is always being evaluated: can we do this smarter? Is there a more intelligent process here? And so I think you look at them in one-to-two-year increments where you're starting to prioritize. You don't try and tackle too much at once. But then it's really probably, like, quarter to quarter, and you want to be constantly re-evaluating, saying, okay, based on the new capabilities of OpenAI or Gemini from Google, does that change the project we had prioritized for Q3 of this year? It looks like that may have actually been obsoleted; our current chatbot already does that, and so now we don't need it. So the way to think about it is to assume a new frontier model will move to the front of the pack every three to six months moving forward. So whatever today's current state of the art is, which would probably be Gemini 2.5 Pro by most benchmarks: realistically, GPT-5 will probably come out sometime this summer, and that would likely then leapfrog it, and then Gemini 3 will come out, and then Llama 4. And it's just going to keep moving this way. And a lot of times you can ignore it, because it's not going to change your workflows or your life. 
But sometimes an entirely new paradigm shift shows up, where, like, a reasoning model arrives and it's like, wait a second, this just changes things; or, like, the deep research products from, you know, Google or OpenAI. And so you do have to be dynamic with these roadmaps.

13:18

Speaker B

Yeah. And they don't even need to be that robust to begin with. Just get started on, like, what are some things we can tackle now, what are the implications of that, and what do we do next? I mean, we've done that with some workshop clients we've worked with. It doesn't need to be this robust thing from the start. Just get something on paper, have it be fluid and just keep iterating.

15:34

Speaker A

Definitely. Yeah. A lot of times simplifying it is the way to go until you really start getting traction within the organization. You've dealt with all the, you know, the people in the company who maybe just don't want to do this stuff for different reasons: fear of losing their job, it's abstract. Like, you've got to solve for that before you're truly going to scale. And that's why we always talk about how AI literacy is always the first step in an enterprise when you're trying to drive AI adoption and change management. Because until everyone understands what it truly is and the value it can bring, you're just never going to get where you want to go, for sure.

15:55

Speaker B

Okay, so you have this AI roadmap, right? So who should know about it? Should the entire team know about it, keep it with the senior leaders, or is there a best practice on how to roll that out to the team?

16:31

Speaker A

Oh, so I'm a huge believer in total transparency and democratization of AI within organizations. And part of the reason is because I think a lot of the best uses of AI, the highest-impact uses, are going to be developed by the practitioners who are actually in these tools every day and finding great prompts and use cases to apply them to. So, no, I think the roadmap process needs to involve people in it from the start, because so much of the roadmap is personalization of use cases. So if you go get, like, ChatGPT Team or Enterprise, or, you know, get Gemini for everybody, you want to involve them and find three to five use cases where they can build some Google Gems or some, you know, custom GPTs to help them in their job. So yeah, I would 100% think about the roadmap as a collective experience for everybody in the department or organization.

16:40

Speaker B

And there's likely some folks that are more well versed in this and you realize who might be able to add some value to the whole strategy.

17:38

Speaker A

Yep. And I think so much of what we do is all about empowering people. This is why, you know, we have a consulting practice under our SmarterX brand, but we aren't fully scaling it, because we look more at empowering people through education and events in a one-to-many format as being more impactful right now. And so a lot of times we just guide organizations that reach out to us and want to work together; it's like, no, let's get you through the courses, let's get your people to the event, let's run a workshop. Because for me, the more you empower the leaders and the practitioners to do this themselves, the more you enable them through this dynamic phase we're going through where the tech keeps getting smarter all the time. Unless they understand it themselves and can find their own use cases and continue to improve, they're always just going to be relying on the outside consultants to do it, and I don't think that's the way you get ahead here. It's not how you build an AI-forward organization. So yeah, I'm always about empowering everybody at all levels to do this in a responsible way, for sure.

17:45

Speaker B

Okay, jumping into some tools and some strategy. This one came up a lot for small to medium sized businesses. Is it better to invest in ChatGPT or Microsoft Copilot? Especially when your users range from basic to advanced and some may benefit from agents.

18:50

Speaker A

So I don't personally have experience using Copilot. Anybody who listens to our podcast knows that we talk quite a bit about Copilot, but it's usually just anecdotal, based on what we're seeing, reading and hearing. Experiences I can speak to: ChatGPT is a great experience. That's what we use for our teams. But we also use Gemini and Google Workspace, so we have a collection of tools. I would say that from our State of Marketing AI report that we just released earlier in May, ChatGPT is the dominant choice, especially for small to mid-sized businesses and individuals. So, yeah, my general guidance is if you choose ChatGPT or Google Gemini, I don't know that you can go wrong, honestly. And Microsoft Copilot is built on top of OpenAI's models. It's just, like, a customized experience and user interface and stuff. So, yeah, I don't know that you can go wrong with any of them. You just have to commit to really using them every day, experimenting with them and finding valuable use cases. Too many organizations just buy one of these things for their teams and then don't train them on how to use it and personalize, you know, use cases for them. So my general guidance is ChatGPT is a safe bet.

19:05

Speaker B

Yep. So speaking of ChatGPT, someone actually left this comment: my CEO won't pay for the license because there's a free version. So how do you make the case to leadership that a paid license for the team or for an individual is worth it?

20:23

Speaker A

I would first demonstrate super practical use cases that show the CEO the impact ChatGPT has or can have. Time saved, increased productivity output, increased growth, improved decision making; like, whatever the CEO responds to, you know, those kinds of triggers, I would talk in that manner. And then once you've created the value perception of what it enables, you know, "I went from doing this project in 30 hours, I now do it in three hours, so I'm actually able to do these 10 other things I wasn't able to do before." Keep it simple, but show the value, and then pull in the rest: by having a business account, here's what we get. Privacy, security, custom GPT sharing across teams. And, like, what does that mean? You've just got to make that business case in the language that matters to your CEO. And each CEO is different in terms of what matters. It sounds like this might be the example of a CEO that is probably very numbers-driven if he or she is not willing to spend $20 a month. So I would talk in numbers, most likely, and show the impact it can have on revenue or costs.

20:38

Speaker B

Yeah, that could get a year's license paid for in a hot second. Like, with just one example. One use case.

21:55

Speaker A

100%.

22:02

Speaker B

Okay, a practical question that I really liked. I'm using multiple AI tools, but each only does a few things well and the costs are adding up. How do I better train and support my agents so the company becomes more AI forward without overwhelming them?

22:03

Speaker A

Yeah, so, I mean, I'm the CEO. I don't mind spending money on different tools, 20 bucks here, 20 bucks there. But I have definitely found myself cutting back. I think last month I got rid of Perplexity; I hadn't logged into Perplexity in three months, so I just got rid of it. I think I got rid of my Claude license a week or two ago. And again, not because the 20 bucks really bothers me. It was just noise. And what I was finding is, every once in a while I would duck in and I would try something in Perplexity or Claude, and maybe it would be, like, incrementally better for one use case. But then I would go to find the thread and I'm like, God, what chatbot did I do that in? Like, I can't even remember if it was my personal account, my business account, my ChatGPT. And so I'm finding, for simplicity's sake, I get 99% of what I need from Google Gemini and ChatGPT. And so I'm centering my uses there and then trying to solve as much as I can through custom GPTs and Gems. And now with, like, the coding ability that's baked into these things, I'm finding I need fewer and fewer tools. And so for my own personal adoption, and what I would hope to guide our team to do, is maximize one or two of the tools. And then there are specialized use cases that require, like, Descript. We're not getting rid of Descript; Descript is fundamental to what our team does for audio, video production, the podcast, webinars, all that stuff. So we absolutely are going to keep, you know, pushing on that and learning all those tools. Same with HubSpot for us here. We're going to have these other tools, but when I think about our core chatbots, they're able to increasingly do a lot of general things very well, and so hopefully we can reduce our tech stack there.

22:17

Speaker B

Okay, so not everyone is a CEO like you who is very willing to go in and try new tools. So, you know, is that the best place to start, the best pitch: like, get ChatGPT or Copilot and start there?

24:07

Speaker A

I do think so, yeah. I think finding your core chatbot and then spending a lot of time with it and finding its full capabilities is going to solve a lot of challenges, open up a lot of use cases and create a ton of value. So that is definitely... like, I recently posted about that on LinkedIn. I'm actually probably going to work that into my B2B Summit opening talk. I do think that if you just pick a chatbot, get really good at prompting with it, learn to build a few Gems and GPTs, and then use a couple of the ancillary features that are baked right in, like NotebookLM and Deep Research, I think you have surpassed more than 95% of your peers in what they're able to do with these things. And so, yeah, if you're a graphic designer or a video producer, you're going to have specialized tools that these things can't solve for. But for most knowledge workers, regardless of your profession, that on its own is going to dramatically increase your efficiency and productivity and your creativity, and thereby your ability to make an impact on growth and innovation and things. So I think that's, for most people, where you start.

24:19

Speaker B

Yeah, I think your point about, you know, needing to go in and use these tools. And the tools are so similar; I mean, obviously very different, but they're so similar. And really, since the beginning of time, tech's only as good as the person using it. So if we're mastering certain tools, let's just go with it. Keep going with that.

25:31

Speaker A

Yep.

25:48

Speaker B

Okay, zooming out a bit. In two years, how many gen AI platforms do you think will dominate the enterprise landscape? Are we looking at many or just a few? I mean, just looking at MarTech, and this isn't even all AI: Scott Brinker's landscape is at what, 15,000 tools right now?

25:49

Speaker A

Yeah, I think there's only a few, honestly. Like, there's going to be a hundred thousand. Like, his landscape could literally be a hundred thousand, because everyone's a developer now. Anyone can go in and build apps, and that's only going to get easier in the months and year or two ahead. So you could build apps for anything in the future. But I do think that it's largely going to come down to Microsoft, OpenAI and Google for, like, what Sam Altman would call the operating system of the future. We've already seen Anthropic is shifting away from the chatbot competition. Like, they've accepted they just cannot win in that game, and they're pushing now very hard on coding and the safety and alignment side because they can't compete. And I don't see... like, xAI might try and compete with Grok on the consumer chatbot, but they have no chance in the enterprise. So I do think that that's where a lot of it's going to live. And then I think a lot of people's experiences will actually be through, like, a Salesforce or a HubSpot, where you're already doing everything within it and your chatbot experience just lives on top of, you know, that data layer and applications. So yeah, I think that's how it's going to play out: you're going to pretty much choose between Microsoft, OpenAI and Google, and then you're going to have your core chat experiences within the other essential pieces of your tech stack.

26:04

Speaker B

Yeah, and I do like the smaller tech, because most of the smaller tech is built by people who had a problem no one could fix, so they built it themselves. So they're just making the big tools better. Okay, and what about open source tools and models? Do you have any thoughts or concerns around using open source LLMs in the enterprise AI stack?

27:32

Speaker A

There's a couple of ways to look at this. I am not the most well-read person on the benefits of open source, despite all of my efforts to understand it. I probably have more concerns about open source than I have the ability to advocate for it, but I wouldn't say that's a highly educated opinion you should share. I listen to both sides all the time. I think my predisposition is that I'm not the biggest open source fan, but I try to always keep a very open mind, and that predisposition might be the incorrect way to look at this. My concern with open source is that the understanding of the security risks is so low that there are going to be ways these open source models are used in very nefarious applications the business world is not anticipating right now. Now, that being said, the opposite opinion would be that you're trusting OpenAI, Google or Microsoft to control your intelligence stack, and they each have their own challenges. If a model is proprietary and closed-weights, you can't do anything with it, and I think more developers prefer the open source side because they have more control over the model. So I think it's going to be a mix. A lot of enterprises are going to build on open source because it gives more flexibility, and they might actually be able to argue it's more secure. And a lot of them are going to accelerate faster by building on top of the APIs of Google and OpenAI and Microsoft and others. This is one where I would not consider myself the foremost expert on open source versus closed. I would lean on your IT people and your security people, listen to their opinions as well, and make sure you get a variety of perspectives on this, because there are people on both sides. It's kind of like politics.
There are people on both sides who are 100% convinced they're right, and I'm sitting in the middle. I can kind of see arguments for a lot of this on both sides, so I'm not a purist in either direction.

27:55

Speaker B

Okay, let's dig into learning, literacy and leadership. So let's talk about AI councils. If someone has an AI council, or is looking to build one, should the CEO be on the council? What roles should be involved in a council?

30:40

Speaker A

In an ideal world, the CEO is at minimum very supportive of building the council. If it's a small to mid-sized business, I could imagine a potential role for the CEO to be a part of it. If it's a bigger enterprise, I mean, I'm the CEO of a small business and I don't have three minutes free every day, so I can't imagine also sitting on a council that's meeting a couple times a month. But I would 100% want to be involved in it. So I think it just depends on your organization. But an AI-forward company's CEO has to be 100% involved in the overall AI vision and strategy and needs to be supportive of whatever the council or councils are doing. And then ideally there needs to be a C-suite executive sponsor or co-chair. You have to have the C-suite involved, and the bigger the company gets, the more important that becomes.

30:57

Speaker B

So would you say someone from every department? People that are very AI-forward, some skeptics? Like, should there be a mix of all those folks?

31:57

Speaker A

I mean, in my mind the ideal is you have a leader from each of the core departments of the company represented. Early on, in like 2024, as AI councils were becoming more normal to see in organizations, there were just too many times I would meet with companies and they wouldn't have invited the CMO, or anybody representing the marketing organization, to the council. And I'm like, how is that possible? I think what was happening in most instances is it was being treated as an IT, you know, technical solution. So it was living under the CIO or that department, and then legal was involved, and they were basically just treating it as risk and security and technology. They weren't thinking about business challenges and business outcomes and products and services. And that to me is just a very narrow-minded view of what the role of the council should be. Now, if there were separate AI councils, like the marketing team had their own, fine. But I think you have to have department leaders involved, because this isn't just a technical thing, and especially when.

32:06

Speaker B

Marketing has so many use cases.

33:14

Speaker A

Yeah, it's usually the tip of the spear when it comes to value creation. Marketing, sales and customer success are like the first three things that you should be looking at.

33:16

Speaker B

Yep. Okay, let's say your company only has a basic AI policy. Where do you start to begin using it to educate your team? Like, how do you get that off the ground and enforced?

33:26

Speaker A

Yeah. So for an AI policy, we're actually in the process of updating ours. Kathy had put together the initial version of an AI policy for us back last summer, in 2024, and just in the last couple days we were taking a fresh look at it and starting to realize all the complexities that have been introduced that we weren't accounting for in our own generative AI policy, such as computer use agents. Those didn't exist until the last five months. Now you have agents that can take over your screen and see and understand everything you're doing. We didn't anticipate that, or we didn't think it was going to happen soon enough that we needed to put it in the policy. So you're starting to look at usages like that, and at what you're allowed to connect it to. If you go into ChatGPT right now on a team account, it asks, do you want to connect to your Google Drive? It's like, well, what are we allowed to connect these things to? So I think it probably helps to take a fresh look at these policies every three to six months, based on where the models have advanced to and what the new capabilities are, and make sure the policy is still up to date and the team is still properly trained on it. It probably needs to be part of all your onboarding processes for new hires; you teach the interns. The policy is becoming more and more important, because the risks associated with this stuff are becoming much greater, and we really need to make sure people are doing this in a responsible way.

33:37

Speaker B

Yeah, and there are those slides you put together, I think it's in Scaling, definitely one of your presentations, where it's like 70% are using AI and 64% of companies don't have a policy. So people are using the tools whether you tell them how to or not.

35:11

Speaker A

Definitely.

35:26

Speaker B

Okay. Once a company starts that journey with their AI policy and usage, what's a solid KPI to track AI literacy or adoption during those early pilots?

35:29

Speaker A

This is a good one.

35:40

Speaker B

It's a good one.

35:41

Speaker A

I actually have been thinking about this. So again, if you're kind of new to what we do: we've had an AI Academy for five years, but it has primarily consisted of two core series and certification programs, Piloting AI, which was originally targeted at marketers, and Scaling AI, which is more for business leaders. We are completely reimagining all of that. AI Academy is becoming far more robust; it's meant to be a turnkey academy for any size organization that wants a plug-in AI academy with personalized learning journeys. So I've been thinking a lot about this. Let's say a company comes in and buys 50 licenses for their marketing team, 20 for their sales team, 10 for their customer success team, and maybe their lawyers and accountants get some licenses. How do you measure the impact that's having? There are simple ways, like completion and certificates earned, showing that someone has actually gone through and done the thing. But when you start to actually look at applying that learning, you've got to look at things like the utilization rate of the Copilot license they have. So say you put someone into AI Academy and they go through AI Fundamentals and Piloting AI and then take AI for Marketing, and they kind of go through this journey. They had Copilot before and used it like 10 times in the month of May, and now in the month of July they've used it 100 times and they've built five Copilots. That's one way. Then you can level down further: what is their efficiency, what is their productivity? Which may require establishing some benchmarks. Like, here's the thing they do in their job all the time, and they now do it three times faster than they did before.
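The three measures described in this answer (completion, utilization lift, and a productivity benchmark) can be sketched as a rough calculation. This is just an illustrative sketch; the function name and all the numbers are hypothetical, not from any real academy or Copilot reporting API:

```python
def literacy_kpis(courses_completed, courses_assigned,
                  uses_before, uses_after,
                  minutes_before, minutes_after):
    """Return simple AI-literacy/adoption KPIs for one employee."""
    completion_rate = courses_completed / courses_assigned
    # How much more often they use the tool after training.
    utilization_lift = uses_after / uses_before if uses_before else float("inf")
    # Benchmark task speedup, e.g. 3.0 means "three times faster".
    speedup = minutes_before / minutes_after
    return {
        "completion_rate": completion_rate,
        "utilization_lift": utilization_lift,
        "speedup": speedup,
    }

# Example mirroring the conversation: 10 Copilot uses in May vs. 100 in July,
# and a benchmark task that now takes 30 minutes instead of 90.
kpis = literacy_kpis(courses_completed=3, courses_assigned=3,
                     uses_before=10, uses_after=100,
                     minutes_before=90, minutes_after=30)
print(kpis)  # {'completion_rate': 1.0, 'utilization_lift': 10.0, 'speedup': 3.0}
```

The point is simply that each KPI needs a before-and-after measurement, which is why establishing benchmarks up front matters.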

35:41

Speaker B

Right.

37:31

Speaker A

So it's something we're definitely going to be working on. We're actually hoping to build into our academy the ability to do an assessment up front and then take assessments as you go that can help you measure progress. But yeah, I'm thinking about that a lot as I'm building the courses that will make up sort of Academy 3.0, when it comes out.

37:31

Speaker B

Yeah, because comprehension is one thing, but putting it into action is completely different.

37:52

Speaker A

Yeah, so our whole thing is we want to drive transformation, whether that's individual transformation or business transformation. And so you need to be able to measure how you are transforming as a professional or as an organization, and a lot of what we're doing is really starting to think about that. Now, separately, for yourself, you could honestly just go into ChatGPT and say: hey, I'm going to start investing a lot of my time in AI literacy. I'm going to be taking courses and going to events. Or: I'm doing this for my team; we're going to be investing X dollars per team member this year in literacy. How could we measure their improvement? And you'll probably get some really cool ideas straight out of ChatGPT.

37:56

Speaker B

Yeah. Okay, here's a little time for you to reflect.

38:39

Speaker A

Okay.

38:42

Speaker B

If you were building Marketing AI Institute all over again from scratch today, with what you know now, what would you do differently?

38:43

Speaker A

Oh boy. Yeah. So, for people who don't know the context, I started Marketing AI Institute in 2016 as a spin-off business unit of my marketing agency at the time. I sold the agency in 2021. We launched MAICON, our flagship in-person conference, in 2019. But the Institute lost money for six straight years; we never turned a profit from 2016 until March of 2023. So that's the quick background on the Institute. I will say, overall in life, I am not one to look back and think about how I would do stuff differently, because I generally try to focus on the present. This is where I am, and I'm usually very content with where I am. And I'd say as a business leader I kind of feel that way. It was a hard journey. It was a long time talking into the wind, where people weren't paying attention or understanding the significance of what was going to happen. We tried a lot of stuff. We experimented. We were willing to fail a whole bunch, and I would do that all again, because we're here, I'm content with where we are, and we have this opportunity to hopefully make a big impact on accelerating AI literacy. We've developed amazing partnerships and we have a great team. So I'm happy with what we did and the decisions we made, and I'm so focused on the future that I honestly don't ever go back and think I would do something different or try something else. It's a really good question, but for me personally, I very rarely do that. I don't look back. I basically just try to live every day, personally and professionally, to not have regrets. So if I have an idea, I'll pursue it. If I have a vision for something, I'll go after it.
I don't fear failure at all in business. I just try to be strategic enough financially to give us the freedom to fail. I mean, literally, in my Marketing Agency Blueprint book that I wrote in 2011, the final chapter was Embrace Failure. So I think I've just always approached business that way.

38:49

Speaker B

I like it. Okay, this is a little inside baseball, so you might need to explain some of the context of JobsGPT and AI exposure levels. How do you actually bridge the gap between current capabilities and future roles? And what is the smart move for career future-proofing?

41:20

Speaker A

We'll put the link to JobsGPT in the show notes. If you're not familiar with it, you can go to SmarterX.ai, click on Tools, and click on JobsGPT. I first built it in summer of 2024, and the idea was that you put in your job title or job description and it helps you find use cases for AI, specifically language models and chatbots. But it was also meant to accelerate business leaders being proactive about the impact AI would have on jobs and the displacement of their teams. What I found, and continue to find, is there's a lack of comprehension about what these models are capable of and what they're going to be capable of in the very near future. So I devised these AI exposure levels, a key that today has 11 levels. It walks through: okay, today they can do text in and text out, and they can connect to other systems; now they're getting video capability, image capability, advanced reasoning capability, persuasion capabilities, and the ability to act in the digital world, like this computer use. We knew all of these things were coming; the labs told us this is what they were working on. So my goal was to try to train a GPT that could project out the impact this would have. If you put chief marketing officer, customer service representative, or sales director into JobsGPT, it will say: here are the ways you could be using AI, here's how AI is going to impact you, here's time you could save. And then you can say, okay, but I'm a little worried that as it starts developing the ability to produce videos, I as a videographer may not be as needed. And you can talk to it about that. And then this year, in April, I introduced a new conversation starter where you can just talk to it about future jobs. Like, here's my major in college;
I'm not sure what I'm going to be doing. Or: I'm a writer, and I don't know what writers are going to be doing in 12 months. And it'll work with you to devise what the future of that career path could look like. So that's what JobsGPT was meant to do: be a conversation starter and get more people thinking about the near-term impact these things are going to have on jobs, so we can be proactive as business leaders and reskill and upskill our people to be better positioned for the future of work.

41:36

Speaker B

Which you went through when we were looking at hiring for all these roles. You know, what are the jobs, what could AI potentially be doing, how can we make sure we're not going to hire people and then not need them in 12 months?

44:06

Speaker A

Yeah, and that's a very tangible example of how you would use it. We were hiring a collection of people, and for each job description you look at it and say: okay, this is a full-time employee today, but knowing what these models are going to be capable of, and when GPT-5 shows up we can reasonably assume what its capabilities will be, is this still a full-time job? Do we still need it? Or if we're building out a customer success team, what may have taken five or ten customer success people a year ago, maybe we can do with two or three. Now, we have the luxury of building from the ground up, so we don't have to worry about legacy teams that may not need as many humans in the future. So I get that we sort of have the easier approach here. But my goal was for companies that do have legacy teams: the more proactive you are, the better chance you have of being prepared as this impact truly starts coming. Because it's coming one way or the other. It's going to happen. We can't just ignore it and then deal with it when it's here. I would far prefer people were realistic about the impact it's going to have and then started preparing people. And even if it's not "hey, we're going to reskill you in AI," it can be: we're going to provide this AI education and training, and we hope you're going to be here, and you're going to bring this to life. But if nothing else, we're going to make you more employable in the future. Because I don't think any company right now can promise that two years from now they're going to need the same level of staffing. I honestly think the only way you can do that is if you are extremely confident in your growth trajectory. If you're a company that's growing 10% or less per year, there is no way you need as many people in that business in 18 months as you do today. And if you do, you're not running the company very well. That's just the reality.

44:14

Speaker B

So you talked about reasonably assuming, and about what might change with roles. Not everyone reads as much as you do. What should they be doing, or paying attention to, to keep up with that and frame this in a way that actually makes sense for their business?

46:07

Speaker A

I don't know. I think that's a big part of why we introduced the AI Literacy Project, and I've now mentioned it a couple times. If people aren't familiar with it, we'll put the link to the Literacy Project in the show notes. The whole premise was: we are going to try to do our part to provide as much free education and inspiration as we possibly can, through our podcasts, through the newsletters, through blueprints we publish, through free classes, through this AI Answers series, so that you can connect the dots in your company, your industry, your career path. What I encourage people to do is experiment with the tools every day, figure out how you best learn, and just go do it every day. Don't get overwhelmed by the scope of all of this. Just commit. Okay, I'm going to listen to three podcasts a week, if you like podcasts. I'm going to go get a couple books, because I love reading. I'm going to take courses, because that's how I learn. I need to be inspired by other people, so I want to go to some events. Just do it. Find the thing that motivates you and go do it. I totally understand very few people are going to want to consume as much information about AI as I do. Some days I don't want to do it. Literally yesterday I spent three hours power washing the basketball court in my backyard, and I just listened to podcasts the whole time. And I wanted with all of my being to just listen to music for three hours, because my brain was fried. But I was like, I have to listen to the podcasts. That's me, though. That's what I have committed to do in my life. You just need to find out what your interests are and which thread of all this you want to really push on.

46:21

Speaker B

Yeah. Just think, you're doing it for all of us, so we don't need to do that.

48:07

Speaker A

Yeah. Total side note, and this one's for my family. My one regret, and I said I don't do regrets, my one regret in life is that I didn't discover power washing until my mid-40s. That is like the most satisfying thing you can do, outside of cutting your grass, because it's this immediate gratification. I know this has nothing to do with AI, but sometimes we need the more human side. If you just need to decompress, buy yourself a power washer, pick some cement in your driveway or your back patio, and just go clean it.

48:10

Speaker B

I just got a new edger on Saturday.

48:44

Speaker A

Oh, edgers are great too.

48:46

Speaker B

And I've walked out to my tree lawn like four times and just stared at that beautifully perfect line.

48:47

Speaker A

Every time I cut my grass, I'll stand there and say to my kids, doesn't it look amazing? Yeah, that looks great. Claire, by the way, you can cut this part if you want.

48:52

Speaker B

No, I say keep it in.

49:01

Speaker A

Okay. It's keeping my sanity, for sure.

49:05

Speaker B

Thinking about this next generation: you have young kids, I have moderately young kids. What courses should kids in school be thinking about to prepare for an AI-infused world? But really, what should colleges and schools be thinking about to help these kids?

49:10

Speaker A

Step one is teach the teachers. You have to drive AI literacy among the gatekeepers of knowledge and experiences in the classroom. If we're not empowering the teachers and professors to responsibly integrate AI into the classroom, this will fail in your school, and there's very little debating that. The second step is you have to think about AI as a layer over everything else. I'm sure there's an argument for maybe having a dedicated AI major down the road, but I would have to think about what the justification would be for that. I think we talked on episode 148 or 149 about how Bowling Green State University now has an AI plus program, where you can layer AI on top of whatever your major is. I love that concept, where we're just teaching the AI layer over everything. So if you want to go into philosophy or journalism or business or sales or psychology or sociology, understand how AI is affecting that discipline, that career path. I feel like there needs to be an AI 101 everywhere. I said this back in 2018: every freshman in college should take AI 101. And then there just need to be elements of AI layered over everything else. And I think it needs to start very early. I'm a huge advocate for teaching AI responsibly at the middle school level. My kids are going into seventh and eighth grade, and God, I can't believe their school year is done this week. So they're going into seventh and eighth, and I am very strongly evaluating how to more aggressively prepare them for the real world, because I don't think it's fair to ask their schools to do it fully right now. I understand there has to be a transitional period. So as a parent, I am going to take it upon myself to do everything I can to prepare them for the future that I am fairly confident is coming.

49:27

Speaker B

So back to the AI plus at Bowling Green: in, what, just January of 2023, people were like, I'm going to be a prompt engineer, we need prompt engineers. And it's like, we're all prompt engineers at this point. So the AI plus seems like it makes the most sense, versus just AI as a whole. What does that even look like?

51:33

Speaker A

Yep. And I do get asked all the time about computer science degrees. I'm not in the camp that coding doesn't matter, that programming doesn't matter in the future. My son, at 12, is loving learning to code, and he's self-motivated to go learn it. And I look at it and think: man, you learn to do repetitive things, you grind, you work hard through something, you solve problems, you learn the rewards and benefits of working hard on something, coming up with solutions and making decisions. To me, coding is all of those things. And if he told me, I want to go into computer science, I'd be like, great, do it. But I would really like it if you also diversified your background into some of these other areas, or at least took classes in them. If I had my choice right now, I think liberal arts degrees in theory will take on far greater value. So I would actually be bullish on liberal arts colleges that teach a diversity of knowledge.

51:51

Speaker B

Yeah. So my son's in IT, and he's been sitting through all these meetings as they're doing all this stuff. And he's like, why do they keep asking me? And I'm like, because you know how to talk. You can communicate, and that's a rare breed right now. So good luck. Good job.

53:04

Speaker A

Yeah. Yeah, that's good.

53:18

Speaker B

Okay, kind of going off of the parenting and kid thing. What are a few things you would suggest to help teenagers use AI to accelerate learning without relying on it to do the work for them?

53:21

Speaker A

You have to teach them to talk to it as an advisor, tutor, and mentor, not as a cheating aid. My daughter, at 13, spends way more time using AI, for a variety of purposes, and that's basically what I always teach her: I don't mind if you use it to help with that, but you have to say, I am trying to understand this topic. I want to learn it and deeply understand it. I'd like you to help me learn it. Don't go in and say, write this for me. I don't know how you do that without being super intentional, though. We were all kids; we all had other things we wanted to do than the homework, and this is the greatest shortcut in history to avoid actually doing the work. So we still have to develop a next generation that wants to learn, wants to better themselves, wants to develop critical thinking and become strategic, that doesn't just want to ask for answers. That's why I think teaching the teachers is so critical. Because if these kids are raised to believe they're cheating when they're using AI, even when it's being used in a responsible way, that's a disservice to them, because that is not how it's going to be viewed when they're out in the real world. So we have to teach responsible use, and that means as a learning aid.

53:32

Speaker B

And you did do the Kid Safe GPT, which might help answer some of those questions.

54:58

Speaker A

Yeah, I used that this week. Bailen was going to be so pissed at me. We were in the car, sorry, kids. Kid Safe GPT is a thing I built to help parents understand the risks of the different platforms. It's free. You can go to SmarterX.ai, click on Tools, and it's there. It helps you understand the platforms, talk to your kids, and write guidelines. So we're in the car, and I'm like, summer's starting next week, we should probably talk about screen time and digital detoxes and set some guidelines. And he's like, yeah, no. And I was like, let's have a talk with Kid Safe GPT. So as we're driving, my wife's driving, I'm telling it: I'm in the car with my son and I'd like to develop some guidelines for this summer. And if my kids could have gotten out of the moving car, they would have. It was hilarious, because it was giving me talking points and I was just reading them. It's like, this is great.

55:03

Speaker B

So, no conclusion from that yet?

55:57

Speaker A

I got great guidelines. He's not bought in yet.

55:59

Speaker B

All right, we're down to our last two questions. We are right on track.

56:04

Speaker A

Nice.

56:07

Speaker B

Okay, when it comes to building GPTs, is it better to create a specific GPT for each job task or one mega GPT that does content, strategy, internal reports, sales writing, and all of it? Oh.

56:07

Speaker A

So I built CoCEO GPT. We can drop the link for that; there's a webinar you can watch from December of 2024 that teaches you how to do it and has the template and instructions to build your own. You can build it for whatever your job is: Co-CMO, Co-Writer, Co-SEO professional. That works really well. It's designed to do a general set of things, like build plans, talk strategy, and complete tasks, and I use it all the time. But then I also build a lot of custom GPTs for very specific things. And I would say my inclination here, and you can add your two cents, Kathy, because you also do this all the time, is that the very distinct use cases can be very helpful for people to narrow in on: what is the value I'm going to get from using this, what exactly does it do? Like JobsGPT. If I just called it Future of Work GPT, it'd be like, what do I do with this thing? It's super general, even though it might be able to do all those things regardless. But calling it JobsGPT says: okay, this is specific to the impact on jobs, or what do I do with my job? So some of it may be perception-based, but all of it is about adoption. I would say whatever path is clearest to help people understand the value it's going to create, and drives adoption and utilization, that's what I would go with. But I think specific naming of GPTs is probably the way to go.

56:24

Speaker B

I agree. Yeah. The ones that I've built, well, I have a couple that are very specific for particular use cases, and I have one that's a broader strategic one, my Paul GPT.

57:54

Speaker A

Yeah.

58:03

Speaker B

That I built to help me think bigger and think about things in a different way. Thinking like you, thinking like your CEO. So that could be across anything.

58:03

Speaker A

Yep.

58:14

Speaker B

But I guess it is specific in what I want it to tell me.

58:14

Speaker A

Yeah, yeah. I think I would always start with very distinct things. If you're trying to understand the value yourself, pick something you do all the time and just build it, because it's basically for when you'd otherwise be repeating the same prompt every time. Like, I built one for building Academy 3.0. I already have my CoCEO; it knows our business model, knows all the things we do. But I wanted one that I specifically trained to function as an editor and thought partner for the build-out of these courses. So I'll go in, develop my outlines, give them to Academy3GPT and say: assess this outline. You know what I'm trying to achieve, you get who the audience is. What do you think about the outline? What am I missing? So in that instance, having a very specific GPT, or Gem in this case, is really helpful for me.

58:17

Speaker B

Yeah, I agree. Okay, you talked about this, I think on episode 147 or 148, but search: what do you think AI will do to the search marketing industry, especially paid search? We've seen some shifts already. What's happening now, and what's on the horizon?

59:09

Speaker A

I think this space is going to look dramatically different in like 18 to 24 months. I've talked to and listened to some of the leading minds in search, and I've yet to find anybody who speaks with very much confidence about what's going to happen. I mean, just last week we got AI Mode from Google, and then they had some research that showed that apparently people are still clicking on a lot of links in AI Mode and it's not impacting their paid side. I think they even said they're getting higher click rates in some cases. So I don't know. I think until this all shakes out, until we see if ChatGPT is going to have a paid function, and they're obviously coming after Google on all fronts, until we know what their paid function looks like. And until we see how consumer behavior changes around search and whether the next generation ever even goes into a search engine, or if they literally just talk to ChatGPT or Gemini all the time and that's it. And then, how do you serve up ads if voice becomes the dominant interface? I have no idea. Of all the conversations I have, this feels like one of the largest unknowns: what happens in search, and the impact that then has on paid search and our strategies as builders of businesses and marketers. Like, how does that channel change for us? I don't know. And I'm pretty convinced the people at these tech companies don't know. So I would say it's a wide open space. I would be paying very close attention if you're impacted by it, definitely.

59:25

Speaker B

All right, I have one more. I love asking this question at the...

1:01:04

Speaker A

End? You just want to get to 20. You had 19. You just want to be able to say you did 20, maybe.

1:01:06

Speaker B

But also, like, I like ending on a super high note. And that search one didn't seem like the highest note.

1:01:10

Speaker A

Give it to me.

1:01:15

Speaker B

What excites you? Like, what excites you this week about AI, or something you're working on, or something you're hearing about?

1:01:16

Speaker A

At the moment it's more macro, but I feel like we've finally arrived at a point where companies, and more largely, you know, society, are just understanding the moment we find ourselves in and the significance of what's happening. And so there's far more urgency from people to play a role in this, to have some agency in what happens next. And I think just seeing people being more proactive about transforming their own careers, and then the conversations we have every day with company leaders who now understand the importance and are wanting to make an impact. And I think there's still a lot of optimism. We saw that in our State of Marketing AI report, where overall the sentiment is still optimism: people think the future can be bright. While they think it's going to impact jobs, they overall think about the positives. And that's kind of how I choose to think about it. There's a lot of ways this goes wrong, and if I dwell on those things, then I don't sleep so well at night. But every day I try and just focus on the fact that I think, for the most part, we are going to be able to guide the impact this has on business and society. And I'm generally optimistic it's going to go really well if we are intentional about it. And I feel like right now we're doing everything we can to try and drive that positive outcome. But I always feel like we could do more. So I don't know, I guess I'm just excited about the unknown ahead, in a good way, and that means we can kind of reimagine everything, and that's really exciting to me.

1:01:27

Speaker B

Yeah, that's what I've said. I'm working on a presentation for next week, and I like to say that, you know, we have a responsibility and an opportunity to be on the right side of all of this.

1:03:16

Speaker A

Yeah.

1:03:25

Speaker B

And there's a lot of really great things that are happening, and we need to make sure that we are stewards of all of that.

1:03:26

Speaker A

Yeah. Yeah. And I think there's a lot of opportunities for everyone listening to do the same. So many organizations are still struggling to even understand this stuff and develop plans, build councils. So I think just the chance to see people at all phases of their career emerge as leaders to drive a human-centered approach to this, and then to hear the stories every day. The messages I get on LinkedIn, the stuff you hear firsthand from people, the text messages I get from people who are altering their careers or doing these incredible things in their organizations to try and drive change. That's inspiring to me. I hear people picking up this message and going and doing really incredible things. So that gives us motivation to really keep pushing with what we're trying to do, because we can't do nothing. The alternative is we don't try and make this impact, we don't try and push for the positive outcome, and then it's our fault if we get there and it didn't happen. And I say it collectively, like our fault overall. We just can't sit around; it won't go well if we don't all collectively do more to make sure we drive AI for good.

1:03:32

Speaker B

Excellent. Well, if you asked one of these questions in our Scaling AI class, thank you for your great question. And if you are just listening to all of these, as you can see, you are not alone. So there are a lot of opportunities to join our community, stick with this group, and stick together, so we can answer these questions together and learn together. And Paul, thank you as always for this, super helpful.

1:04:52

Speaker A

Thank you. And just a quick reminder: the AI for B2B Marketers Summit is June 5; that is a virtual event, and there's still time to join us there. Intro to AI, the free class we talked about: the next one of those is June 10th. And then the next Scaling AI free class is June 19th. So those are all coming up, and we'll do one of these AI Answers episodes after each of those. So we'll be answering a lot of questions.

1:05:13

Speaker B

That we will.

1:05:37

Speaker A

Three weeks.

1:05:38

Speaker B

And if you really want to do this all in person, we'll see you in October for MAICON. We would love to see people at MAICON.

1:05:39

Speaker A

It's like 19 weeks away, which is crazy.

1:05:45

Speaker B

Yes, that is crazy.

1:05:47

Speaker A

All right, thanks so much. Great job. I think it's your first co-hosting. Oh, no, we've done a couple of question and answer sessions together. All right, well then, welcome back, and welcome to the first co-hosting of AI Answers. And thanks everyone for joining us. We will be back with our regular weekly episode next Tuesday. Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

1:05:49