#156: AI Answers - Data Privacy, AI Roadmaps, Regulated Industries, Selling AI to the C-Suite & Change Management
This episode of The Artificial Intelligence Show features a Q&A format addressing questions from their Scaling AI class, covering topics like data privacy, AI roadmaps, change management, and practical implementation strategies. Host Paul Roetzer and co-host Cathy McPhillips discuss how organizations can overcome resistance to AI adoption and provide guidance for professionals looking to drive AI initiatives within their companies.
- AI adoption requires a change management mindset rather than just a technology implementation approach, considering all stakeholders including HR, legal, and end users
- Most companies still lack basic AI education and training, creating opportunities for proactive individuals to lead initiatives regardless of their position in the organization
- The current period represents a golden age for entrepreneurship due to AI's ability to provide on-demand advisory capabilities and enable smaller teams to accomplish more
- Successful AI implementation requires personalized use cases rather than generic applications, with organizations needing to show specific, relevant examples to drive adoption
- The speed of AI output is creating new challenges around human oversight and verification, requiring organizations to rethink project management and review processes
"It's a golden age of entrepreneurship and one of my great hopes is the way we offset the job losses that are inevitable because of AI is we create so many more businesses. And yes, they will need fewer people, but there's more of them"
"I think that more often than not I've seen that play out where the people who just think, why not me? Like, nobody else doing it, I'll just go do it. Like, that's how most of the things in my career I created happen."
"The AI can output things so quickly, you can run so many deep research projects, build so many strategies, but at the end of the day, those things still need humans to verify them, to critically analyze them to make sure that it's the right approach"
"I feel like now with AI you have on call like a high level advisor in any discipline of business you want. Yes, it's not perfect and it makes stuff up sometimes and hallucinates a bit. But like you can talk to it about legal questions and financial questions and operations questions"
"The organizations that approach it with a change management mindset have a far greater chance of not only driving higher adoption, but getting dramatically greater value out of that adoption"
It's a golden age of entrepreneurship, and one of my great hopes is the way we offset the job losses that are inevitable because of AI is we create so many more businesses. And yes, they will need fewer people, but there's more of them.

Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together.

Welcome to episode 156 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Cathy McPhillips, our chief growth officer. Today is our third episode in the new AI Answers series. This is a series based on questions from our monthly Intro to AI and Scaling AI classes, and we'll also do these for some special virtual events like the AI for B2B Marketers Summit, AI for Agencies Summit, AI for Writers Summit, things like that. So the whole idea here is we continue doing the weekly episode that Mike and I do, breaking down the big AI news for the week, and then two or three times a month we mix in a Thursday special edition that is just answering questions. This one is actually a result of the June 19th Scaling AI class that Cathy and I did. That was the ninth edition of that Scaling AI class; we started it in 2024. The next live Scaling AI class is actually going to be in August.
We usually do it every month, but I'm in the midst of recording about 50 courses for AI Academy, so I could not find a free day in my schedule to do it. So we are going to go with August 21st as the next Scaling AI class. Again, the questions that Cathy and I are about to go through are questions we actually received from attendees of last week's class. If you would like to join the next one, where we go through five essential steps to scaling AI, you can go to scalingai.com and just click on the link about the next upcoming webinar. It's a free webinar and we'd love to have you join us. I usually do about a 35-minute presentation, and we do an ask-me-anything at the end, but we normally only get to maybe 5 to 10 of those questions. So today, Cathy, I don't know how many we've got, but I think around 20 is the norm. So I'll turn it over to you and kind of introduce it from here. Oh wait, I'm supposed to do the read-through on the... the episode is brought to us by MAICON. I forgot about that part. This episode is brought to us by MAICON 2025, our flagship in-person event, happening October 14th to the 16th in Cleveland. Cleveland, Ohio, that is. We would love to have you there. The majority of the agenda is live, as well as a good portion of the speaker lineup, and there are still some announcements to be made. You can go to MAICON.AI, that's M-A-I-C-O-N dot A-I. Prices go up at the end of June, and I apparently have been forgetting to mention this on the podcast, so I'm going to start now: we have a promo code, POD100, so you can get a hundred dollars off your MAICON registration. Okay, now Cathy, I will turn it over to you to introduce this session.
0:00
So prices will go up June 27th, end of day June 27th. So if you are interested, you should do it now, because then you'll get a little bit of savings, and you can use that POD100 code for even more.
3:52
And we're recording this on June 24th. You're getting this on June 26th, so you need to hurry. You need to go do this now.
4:02
Okay, so as Paul mentioned, we have these Scaling AI and Intro to AI courses every month. And my favorite part, aside from Paul's amazing presentations, is just hearing the questions everyone has. Because over the course of 50 Intro to AI classes and nine Scaling AI classes, the questions are always so thoughtful, thought provoking, and always so different. So I wanted to ask Paul these questions so all of you can hear more of what we couldn't get to in that hour. So let's jump in, let's go. And also thank you to Claire on our team. She has been taking these questions and doing some work with them with AI to get them into a flow that works for all of us. So thanks, Claire, for your part in putting this together. All right, number one, how do we ensure data integrity, security, and privacy when we scale AI?
4:11
I feel like we get some variation of this question every time we do the Scaling AI class, every time we do the Intro to AI class, every time I go do in-person presentations. My general guidance here is you have to address it within your generative AI policies. You have to make sure that, at a high level, this is accounted for and you're providing guidance to your team about what they're allowed to do with the information. You have to be in coordination with your legal team and your IT team. You have to understand any risks associated with the data that you're putting in, and the terms of use you have with the different companies. This is why, as I've said many times before, I'm always really hesitant to just play around with the latest and greatest gen AI tools and products, because I don't know who's behind them. So I think this is one where you just really want to have some level of confidence that when you're putting information in, you know where it's going and how it's being used. The big players, Microsoft, Google, OpenAI, Anthropic, and others, are obviously going to try to be very protective of your data. They're going to try to be very forthright about how they're using it or not using it, because they know that's a barrier to adoption within enterprises.
4:57
Yeah, I mean, we've had that recently. We were talking about a tool and you're like, hold up, we need to do a little bit of digging before we start doing some things.
6:05
Well, Mike and I talked about that. I don't remember what episode of the podcast it was, 149, 151, something like that, when we talked about being able to connect ChatGPT to different tools, including Google Workspace. Connectors, I think, is what they call it. And I was like, yeah, we gotta slow down here. I don't know where this is gonna go. And once you give access, what happens, and how do you pull it back, if at all? So yeah, it's good to be asking the questions, and to find either outside experts or internal experts you can rely on if this is not your comfort zone.
6:13
Okay, so let's take this example. What are your steps? Are you going to our COO and saying, can you investigate this? Are you going right to legal? Who are you calling to answer some of these questions for you?
6:46
Yeah, so in our environment, we're a smaller company, so we don't necessarily have the CIO or chief AI officer, which is who you're probably going to, or someone in IT. So for us, it's probably either we're going to do some research ourselves, we may be pulling in legal, or I may just be going to an outside consultant who has more expertise in this than we do, because it's not our area of expertise internally.
6:58
Right.
7:24
Okay.
7:25
Number two, what exactly is an AI roadmap? How much detail should it include, and is it part of a digital strategy or something separate?
7:27
Yeah. So the reason this question comes up is that in the Scaling AI class, one of the five steps we outline is to create an AI roadmap; I think it's the fifth and final step. The way we teach it, it's basically looking at a couple of levels. One is the projects you're running where you're driving efficiency and productivity and improving creativity, just the obvious things based on your current workflows that you can be doing smarter with AI. Then simultaneously, you're looking at the bigger opportunities, like problem solving, looking at core challenges you already have in the business and trying to figure out how to solve them in a more intelligent way, while also looking at growth and innovation. So when we think about a roadmap, there are lots of components to it. There's your overall vision of how you're going to apply AI, and there's looking at use cases. But at its most fundamental level, what it's trying to do is lay out a timeline for the next 12 to 18 months: what are the use cases we're going to pursue, what are the problems we're going to solve, how are we going to drive innovation and growth? So that as we think about the impact on our teams, we're trying to grow the company so we can maintain our employees as we drive efficiency and may not need as many people to do the work that's already there. The roadmap can take on different forms for different companies. My preference is that they're really dynamic documents. It's not something where you stop for six months, build this roadmap, and hold off doing things until then. It's a dynamic thing that you're building as you're running all these pilot projects and starting to solve problems more intelligently. It just puts some cohesion to it, and ideally some visualization, some timeline, of the major things that we're pursuing as a company.
7:35
Yeah. And this was a long question, so I put it into two parts. The second part is, and you kind of answered it: how do you know when your roadmap is ready for implementation? Is it ever really done?
9:16
No, I don't think it is. I think they're very, like I said, dynamic is probably the best word I've got for this. I could zoom back and look at our own organization. I think people have some level of freedom to constantly be experimenting within their business department, business unit, within their team, where they're always trying and experimenting with things, and sometimes they hit on something that becomes core to what we do. So with the podcast, one of the early pilot projects we ran was how to build a more intelligent process for doing the podcast each week. And Descript was one of the tools that became core to that process. But we've tested, Cathy, at least a dozen different tools related to just the podcast alone.
9:25
Oh, at least.
10:11
Yeah. And so that's an initiative that's ongoing, because we know it's going to be core to what we do; the podcast is fundamental to our growth strategy. So that's just constantly going while we're looking at these other bigger-picture things and trying to figure out how to apply it to our own internal operations, our marketing, our sales, our customer success. So yeah, I think we're always running this constant, ongoing experimentation process while working on a more formal document or overall workflow for how we apply AI across the business.
10:12
Yep. And when it came to things like the podcast, there are some bigger ones. We need Descript to do the podcast.
10:46
Yeah.
10:52
We need a smaller tool, or we like a smaller tool, to help us do little snippets. That's not critical; Descript is critical. So it's these many little use cases versus a fundamental tool that we need.
10:53
Yeah. And I think we're always experimenting with this too, with ChatGPT and Google Gemini, which are sort of the two main chat platforms we use, the multimodal model platforms we use. And off the top of my head, I can't think of a time where we literally sat down and devised this whole ChatGPT strategy, like, here are the 20 ways we're going to use it against these 20 campaigns. We just gave everybody licenses and trained everybody how to use the tools. Obviously, everyone in our company probably gets a little more AI education and training on the job than at the average company, because it's so infused into what we do. So we're constantly sharing how to apply that technology, and that's never done, literally every day. Like today, I was building the AI Deep Research webinar that I'll have given by the time this comes out on the 25th. And even in that, we've been using deep research as a company since the day it came out, but we've never sat down and said, okay, here's the plan to do it. And in building the presentation, I actually started thinking, oh, okay, this would probably help us internally; let's make sure everybody in our own company watches this webinar, because this is going to be very helpful for people to think about. So no, I don't see it as ever done. Depending on the size of your company and how your budgeting process works, it may need to be more formal, but I don't think done is ever a thing I would consider for the AI roadmap.
11:04
Number three, how can we maintain meaningful human oversight when AI systems operate at a speed that exceeds human comprehension?
12:31
This kind of relates to something I just talked about on episode 155 about these different gaps: the AI verification gap, the AI thinking gap, and the AI confidence gap, I think is what I called them. This is an ever-evolving process that I think most companies and most leaders are just starting to realize is going to be a major challenge: the AI can output things so quickly, you can run so many deep research projects, build so many strategies, but at the end of the day, those things still need humans to verify them, to critically analyze them to make sure it's the right approach, to have confidence in the strategy that was built so you can defend it as a leader. So I think this human oversight versus machine speed issue is really something people need to start thinking more and more about. I even mentioned recently, I don't know if it was on the podcast or a talk, this idea that the next generation of workers is just going to work so fast. You're going to give them projects, and they're going to come back to your office 30 minutes later, like, okay, I did it. And you're going to be thinking, well, that was a three-day project; I thought I didn't have to see the intern for the next three days, and here they are 30 minutes later. So I think we're going to have to really adjust our project management style, our workflow style, our review and approval process. All of that is probably going to start getting reinvented. And honestly, this isn't something I really even spent much brain power on until the last 10 days or so, and all of a sudden it just hit me across the face that I hadn't even really devised our own approaches to this internally.

And so as I'm building the new course series for AI Academy, I'm thinking deeply about this, and it's probably going to become almost like a lens through which I'm looking at all the stuff we're creating: this idea that we can't just create more. That's not going to solve for the human capacity to do anything with these outputs.
12:39
Number four, how do you feel about the impact of AI in highly regulated industries like banking where adoption has been slower?
14:49
I think it's been slower in marketing, sales, and customer success; it's probably been slower at a department level. But these are industries that have been using traditional forms of AI for the last 15 years, and by that I mean predominantly machine learning, where you're making predictions about outcomes and behaviors. In banking, machine learning has been prevalent in things like risk assessments, loan qualification, and identity theft alerts. So they've been doing it. But the generative AI phase was so abstract, and in banking there's such great risk. Healthcare, financial services, legal, you could throw those in too; they have so much higher risk for things being inaccurate or wrong. So they tend to be more risk averse, they're more afraid of the risk, and so they're more likely to shut down access and not even let people experiment. I think people in these industries need to be more strategic in how they approach this, because you're probably getting approvals for very specific use cases. You're not going to just get your Microsoft Copilot license and be able to use it for anything you want. But if you can be strategic and say, listen, we've identified 15 ways where the risk you're concerned about actually doesn't come into play, and here's how we want to do it, that's what I've seen working in these highly regulated industries. It just takes a little more time and patience. But we've seen it done well in some higher-risk environments by having internal champions who are very thoughtful about how they approach this.
14:56
Well, that is an amazing segue into question number five. How does change management need to evolve in response to the rapid development of AI tools?
16:50
It needs to be thought of in the first place. The big thing we've seen is that in 2023 and 2024, AI was largely seen as the technology department's purview. It was up to IT, the CIO, the CTO; the C-suite was basically just assuming this was going to get solved as a technology problem. And it wasn't necessarily pulling in HR or the people in charge of education and training internally right away; it wasn't going to marketing and sales and service, which are the most logical places to start with use cases. And it wasn't considering the overall impact on employees: that some of them may not want to use these tools, or that they're afraid of these tools, or that they're concerned for their own jobs. So the change management aspect is a much more holistic way to think about AI adoption in an organization, one that considers all of the different stakeholders and recognizes they're not all necessarily going to be extremely excited about AI, or have any clue what to do with the Copilot license or the ChatGPT license. So I think the organizations that approach it with a change management mindset have a far greater chance of not only driving higher adoption, but getting dramatically greater value out of that adoption, both internally and for their outside stakeholders: your customers, your technology partners, your community. So yeah, it's rare to see. I can only think of maybe a handful of instances where I've actually seen organizations take a truly full approach, where they're thinking through everything and not just solving for the technology side.
16:58
Yeah, I mean, there's a whole side of psychological safety: people at work being like, I'm afraid of this, I don't understand it, and being able to go to their managers and say, I want to understand this. You know what I mean?
18:35
Yeah.
18:47
There's just so many layers to it aside from just how much more we can be doing.
18:47
Yep.
18:51
Number six, changes are happening so quickly. How can professionals keep up? And are there trusted resources that stay current with innovations?
18:54
Yeah, I mean, this is not meant to be self-promotional in any way, but this is exactly why our weekly podcast exists. We're trying to filter through, in any given week, probably 250 to 300 sources of information related to AI. It could be articles, research reports, videos, podcast episodes, courses I take myself, books I read. It's a non-stop thing; my X feed is probably the biggest source, notifications from X. So we're trying to filter that down to the things that matter to our listeners, who are, generally speaking, business professionals, business leaders, educational leaders. I know we also have government leaders, venture capital firms, all kinds of other people who listen, but generally speaking, we're trying to talk to the non-technical business leader, or the person who wants to be a business leader, who is trying to figure out how this stuff drives transformation for themselves, for their company, for their teams. So we consolidate those 250 to 300 sources down to roughly 50 that I actually put into our sandbox each week, from which Mike then picks the three main topics and the seven to ten rapid-fire items we talk about. There are usually another 15 to 20 that end up in the newsletter only. That's how we do it, and our goal is that if you only have one hour a week, we serve this function for you, so you'll at least know everything you have to know. Now, if you want to go beyond that, what I often tell people is, if there are threads of AI you find extremely intriguing, like its application to your specific career path, say you're in SEO, or its application to your industry, or some macro-level thing, like you're really concerned about the environment, or you're more intrigued by intellectual property rights, then go find a few experts who talk about that often and share that information.

You could literally go through our show notes and see who we're citing. That's a great way to do it: who are they following, what publications do they read and cite all the time? And you just go through it the way you would always do research. This is how I write books; it's how I would do anything. You find the people you trust, then you find out who influences them, and you keep going and build a list, whether it's on X or on LinkedIn or in a Google Sheet. That may be a tight-knit group of five to ten people. Or if you're someone like me who's consuming this stuff all the time, it's hundreds of people, but it's taken me 14 years of studying this area; that's a really big list because I've been doing this for a really long time. So I would say, don't get overwhelmed. Find the people you trust, and then find the mediums you learn best through. If you love podcasts, listen to a bunch of podcasts. If you love reading books, find the best books. If you like in-person experiences, go to some conferences. You have to understand yourself and how you learn best. This is a total side note, but I'm reading Empire of AI right now by Karen Hao; I've mentioned it a couple of times on the podcast. I listen to it first because I have the most downtime available at the gym, when I'm on walks, or when I'm in my car to consume the book. I don't have that much time to sit around and read right now. But with a book like that, I have to actually go buy the digital form, reread it, and copy and paste key excerpts, because there are things I want to retain, and the best way for me to retain them is to actually see the words and go through the act of copying and pasting something and putting it into my notes.

So yeah, I think you just have to know yourself, and you have to find the trusted nodes in the network, basically.
19:02
Yeah, yeah. I've said this for a long time: my source is the podcast.
22:57
Yeah.
23:02
You know, and I have a few people I follow, but exactly what you were saying, I don't have time to spend doing all of this. So, thank you. All right, number seven, do you have any tips for creating a tailored AI learning curriculum versus a one-size-fits-all approach?
23:02
So, obviously, I think a lot about this one. We're rebuilding our AI Academy as we speak. And I think about the need for learning journeys that are tailored based on what I said before: know how you learn best. This goes back to when we were all kids in school. Some of us learn best by just memorizing things; some of us learn best by experimenting or going through exercises. You have to understand that, and then you have to tailor the curriculum based on your career path and your career goals. If you want to be a leader in your company in this space, then that's going to start to dictate the kinds of content you're going to consume and which courses you're going to take. Our goal with our AI Academy is to make it a truly personalized experience based on what transformation looks like for you and your career and your company. But I will be the first to say there's incredible stuff on LinkedIn Learning, Coursera, Maven; there are all kinds of great places out there where you can get courses. We're just trying to create a very specific form of this that lets people go by their career path, what industry they're in, what department they're in, what their interests are in terms of career goals. So we're trying to build a complementary piece that can then be surrounded by these other things. But at a fundamental level, you need to think about how to personalize it based on how you learn best and what your goals are for the learning.
23:19
Right. Okay, number eight, for someone passionate about AI but not in a leadership position, how can I initiate change at an individual level?
24:51
For this, I would think about what kind of organization you're in. If you're in a small to medium-sized business, that's a starting point. What size company are we talking about? What size team are you on? What are the current barriers to broader adoption within that organization? Is it the kind of company where they just don't want to do it, and no matter what you do, it's not going to drive change? Or is it that they want to, but no one really seems to understand what it means or what action to take, in which case you can literally just be proactive and raise your hand and start bringing ideas to the table? Each person's situation is going to be different, so it's hard to answer this question in one specific way. The thing I generally tell people is to have a default to take action: be curious, explore it, but then be the one who raises your hand and says, hey, I was on the Scaling AI class, they were talking about these AI councils, and I really think we need one on our team. It could be just the marketing team, or it could be that we need one for the whole company. For a 25-person company, it's like, why not you? If no one else is doing it, why not you? And I would say throughout my career, and I started my professional life in 2000 when I graduated, I've more often than not seen that play out, where the people who just think, why not me? Nobody else is doing it, I'll just go do it. That's how most of the things I created in my career happened. You look around and it's like, well, why should I be the one to do this? Well, nobody else is doing it, so I'll just go do it. And I think those are the people who really stand out in their companies, and those are the people who probably have a really positive career trajectory ahead of them. I just wouldn't wait.

I feel like we've still got this little window here for the next few years to be proactive about this, and maybe three to five years from now that opportunity is going to have passed in most companies, most industries. My general rule in life is that I just don't want to look back with regret. And if you are someone who's feeling that urge now, like, I think I need to do something, I would just say go do it. It might not work, but at least you won't regret that you never took action.
25:02
Well, I was on a call earlier today with someone who had built an AI council in their organization a few years ago and then realized recently that it wasn't the best group of folks, that it was done wrong. So they pivoted, and it's all fine. They tried it, it didn't work out the way they hoped, they pivoted, and now it's doing great. And it's like, what would have happened if you had waited two years? You might have started off on the wrong foot then anyway, and instead you're now two years ahead.
27:28
Yeah.
27:53
And now you know how to fix it.
27:55
And you learn something. And I get it; people have to be okay with failure. I don't know if it's a corporate-setting thing. I've never been in a big corporate setting; we had clients that were big corporate clients, but I've always been in small environments. I started my own thing when I was 27, so I've just always had a default to take chances and do things. And I don't know if people are risk averse because it's safe and comfortable, or if that's just the culture of the companies they're in. But for me, generally speaking, once you get comfortable with failure, your potential to do incredible things completely transforms, because now you don't worry about being wrong or taking a risk that didn't work out. It's like, okay, well, I learned something; I'm going to move on to the next thing. Keep going.
27:55
Right. Number nine, how can you address resistance to change and skepticism toward AI, especially when the tools are available but usage lags?
28:42
Yeah. Again, I think it comes down to your environment, what kind of company you're in, and how AI is currently perceived there. But so often it just comes down to the education side: making people understand exactly what it is and what its potential within the company is. And most of the time that means talking their language. Understand why they're objecting, or understand what it is they're responsible for. How do you have to talk to them to make them realize the impact it can have on the company, in words that matter to them? It's possible talking about AI isn't even the thing you need to talk about. So I think, again, it comes down to your personal situation and understanding the people you're trying to influence and what the trigger points would be for those people.
28:53
Yeah, I talked about this on a call just an hour ago, about how at one of the companies we know really well here in Cleveland, someone went to each department and said, here are a few use cases. Did you know you could do this with AI? And here's the result you could be getting from it. They spoke their language and really helped them understand what the value of it would be.
29:44
Yeah, I mean, showing a personalized use case is often a very powerful thing, even at the CEO level. I've talked about that before, like the CoCEO GPT that I built, which we can drop a link to in the show notes. If you have a CEO who's not down with this, just not understanding it, you could literally create three or four slides and show it at work doing things that you know that CEO does every day. And you can completely change their perspective in five minutes. Sometimes you just have to show people, or give them the prompt, because maybe they're not comfortable doing prompting, and just say, here, try this. Often it only takes one or two examples for someone to realize the power of it.
30:01
Yeah, that's actually the next question is what's your advice for someone leading a team, a lean team who needs to pitch AI to executives with no time or interest in experimentation?
30:47
Yeah. Focus on one, two, three use cases that are totally relevant to them, that have an immediate value you see right away. Don't overload them with jargon; make it meaningful and tie it to the things that matter to them, like revenue growth, productivity, whatever the words are that are going to connect with them. And then show them something. Show them a before and after: here's life before we tried this pilot project, here's life after. And it's hard to argue with data. Any leader is going to go based on instinct if data doesn't tell them otherwise. So if you can walk in and show powerful data, they're going to listen every time.
30:56
Absolutely. If a large organization has rolled out something like Copilot, but no one is talking about AI or expanding beyond it, what are some tactical next steps to drive broader AI engagement?
31:41
You know, it's probably just building on the last two, where again it comes back to showing actual uses of it. So if you have Copilot or Gemini or ChatGPT or Claude or whatever and adoption is low, there's a pretty good chance it's because no one personalized the use cases for people. If you're going to give people in accounting or finance or operations or customer service access to these tools and you don't hold their hand and build the first three Copilots or GPTs with them, like, hey, I built these, they cover 30% of what you do every day, they will be immediately helpful to you; if you just hand these tools over and say, it'll help you write some emails and do first drafts of your documents and stuff, well, that is not the most valuable thing at all that these things do. And I think too often in organizations that don't have that change management mindset, that only have a give-them-a-technology-tool mindset, this is what happens: people just aren't going to adopt it. They're not going to get past those first obvious use cases that are on everybody's marketing websites. It's not personal to the user. So the way I've seen it done best is where you build GPTs or Copilots for people, and/or empower them to build their own. We've seen this happen with companies we've done some consulting for, where we go in and build some GPTs for them, but in the process we teach their team how to build their own and how to identify what a good GPT is. So I think that's how you solve that.
31:56
And also, you know, I don't know if CEOs or leaders can see usage stats for their employees. Is that possible for the different tools?
33:34
Depends on your license. Yeah. I mean, with ChatGPT Team, we don't have usage data; I don't know how people are using it. With Enterprise you do. And I assume with Copilot, at some threshold, you have visibility into that as well.
33:41
Right. And then we've also talked about, you know, doing weekly standups, asking who has a good use case to share. That gets people more interested in realizing what's possible.
33:55
Yeah. And I've seen examples of people requiring a number of GPTs to be built, like they actually build it into their KPIs: you have to build a GPT and you have to hit this utilization rate. So, yeah, I think it's starting to evolve where people are putting some metrics behind adoption and utilization.
34:04
Okay. As a director in higher ed, how can I motivate leadership to pursue something like Ohio State's AI fluency initiative?
34:23
So the context here is, on a recent episode, we'll put it in the show notes, off the top of my head I can't remember which episode it was, we talked about the initiative at The Ohio State University, where they're actually building AI fluency into every student's education and requiring professors to go through AI training as well. And it was a great initiative; we're starting to hear more things like that. I have personally advised some major universities, from the provost and the deans on down. I've sat in the rooms with some of these leaders, and honestly, my general feeling has been that in these rooms I've talked to a whole bunch of really motivated people who want to help prepare their students for the future of work. But they live within institutions that don't move very fast, sometimes because of things that are out of their control, sometimes because it requires tenured professors who don't want to change to be a part of it. And that's kind of your impasse at that point. So I think there are a lot of barriers or friction points to doing something like Ohio State is doing. That being said, there are now starting to be examples of universities that have made this change, which means there are going to be people you can learn from. The challenge going into the 2024-2025 school year was, who had done anything that you could go look at as a case study? And, you know, I think Ohio State's going to find out real quick that it's not going to be as easy as the press release makes it sound. You read the press release and it's like, this is genius, this is going to be brilliant. I am sure there are going to be all kinds of obstacles to executing what they're envisioning. But like we talked about earlier, you gotta try.
You gotta put it out there and start doing it and keep learning and get better and better each school year. And like I've said, as a parent, if my kids were high school age (mine are going into seventh and eighth grade, so I'm a couple years out from really looking at this), it would move to the top of their list. I would be like, hey, I want you to at least look at Ohio State, because I as a parent feel they are preparing their students for the reality of the world. So I think more and more parents are going to start asking these kinds of questions going into the 2026-2027 school year. So I would talk to people at Ohio State. We're trying to do some work around this; it's nothing I can announce right now or anything, but I think trying to collect and tell these stories is really important. So we're going to try, maybe through the podcast, to do a little bit more around telling the stories of higher education institutions that are moving in this direction, so that hopefully people can learn from it. Because I know we have a pretty broad and loyal base of people in education who listen to the podcast.
34:31
I was going to say, there's a group in our Slack community, and there's a big group that comes to MAICON every year, and they're glued together and sharing all of this with each other. So it's really amazing to see.
37:37
Yeah, the Slack community is a good mention, Kathy, and we can put the link in for that. If people aren't familiar with the Slack community, it's just a free community that we host. There are over 10,000 people in there.
37:47
There's a higher ed channel specifically, yeah. Okay. Which AI tools do you like the best, and do certain ones work better for specific industries? How do you personally evaluate and select them?
37:57
So I think everybody has, like, a home base, which is your chatbot slash multimodal model. That's going to be Google's Gemini, OpenAI's ChatGPT, Microsoft Copilot, or Anthropic's Claude. I don't know anybody that's using, like, xAI's Grok for more than just playing around with it, not in an enterprise environment. Or maybe you're living in, like, Salesforce and building Agentforce within there. So I think the first thing you're going to do is find the fundamental one that has all these different use cases built into it through that chatbot experience. They're all going to have reasoning capabilities baked in. Gemini 2.5 Pro has reasoning built into it, so I use that model a lot at the moment. ChatGPT has their standard chat model, so GPT-4o is the main one, and then they also have a reasoning model, o3, and you can bounce around between those, or o3-pro, which is the one that just came out last week, I think. And GPT-5 will be a combined model, like Gemini 2.5 Pro is. But generally speaking, ChatGPT, Gemini, you're probably living in one of those, or, like, a Copilot. So that's the foundation that is horizontal across any profession, any department, any career path; you can find dozens or hundreds of use cases within those tools. And then you go looking at specific tactical tools that are designed to augment your capabilities and build smarter workflows and smarter strategies in specific areas. Like we talked earlier about podcasting; if you're in marketing, you may have an SEO one, or one tied to a writing platform, like a Jasper or Writer, that's specifically designed for those purposes. And then the other thing I would say here, to simplify things, especially in larger organizations that have more challenging procurement cycles: look at your current tech stack. Like in our case, we use HubSpot. What AI capabilities does HubSpot have?
Then we don't have to go out and get three new vendors; can we just do it within HubSpot? So we're constantly looking at the existing tech stack and asking what else exists in there that we can use, without having to make our tech stack more complex than it may already be. Correct.
38:11
Okay, this is a long question, so bear with me: how can innovators best use Problems GPT, especially for category creation? And here's an example. The question alluded to a cybersecurity company with a new funding round, wondering what new lines of business they could develop using Problems GPT.
40:48
Okay, so Problems GPT is a custom GPT that I built that is available through SmarterX. You go to SmarterX.AI, click on Tools, and it's one of the GPTs right there. What it's designed to do is help you identify problems in your company that AI may be able to solve more intelligently. It helps you write a problem statement, so it could be related to customer churn, audience growth, or, like in this case, innovation and market growth. Then it'll help you draft the problem and value statement, and then it'll develop a strategic brief, and you can talk to it like an advisor and say, okay, here's what we're trying to do. So in this specific example, a cybersecurity company with a new funding round wondering what new lines of business they could develop, your problem statement might be something like: we're trying to find new opportunities for growth next year; we raised a $5 million round of funding and we want to explore new markets; help us identify what those could be; we're trying to generate $20 million in revenue over the next three years. Again, talk to it like it's a consultant: here's what we're trying to do, and it'll help you do it. Now, I will say Problems GPT might get you there, but I would also think about using a reasoning model for this one. When we're talking about innovation and growth and deeper thinking around this kind of thing, o3-pro in ChatGPT or Gemini 2.5 Pro, I would think about working with those as well. The other thing you could try is a deep research project within ChatGPT or Gemini, where you're saying the same thing: here's who we are, here's what we're doing, we're trying to identify these new markets, underserved areas, new product ideas; help us do it.
So what I often advise people is literally to bounce around and try a few different models on something like this. Something like this is really important and really valuable, so put it into three different models, see what you get, and combine the best of them. The whole idea here is you're in a brainstorming function; you're trying to stimulate some new ideas and think of new paths that maybe you wouldn't otherwise come up with. The AI is there to function as that brainstorming partner, and then as kind of an advisor once you start zeroing in: like, I really love these three ideas, build this out with me now. So, yeah, I would say you can try Problems GPT for sure. Just give it the background; pretend you were asking me the question, basically write it out like you would ask me, and put it in there. And then I would try the same thing in a reasoning model or two.
41:08
And can you explain how, in GPTs, you can do that drop-down and pick the different models?
43:52
Yeah. So, like two weeks ago, OpenAI made it possible for you, the user, to pick which model to use with a GPT. The reality is that every GPT built before two weeks ago was built to function with the 4o model. So I'm not so sure they will work the same if you pick a different model. If you were to go to Problems GPT and choose to use the o3 reasoning model, it probably actually breaks the functionality. I think I mentioned this on the podcast, but I actually asked the chief product officer at OpenAI about this on Twitter. He did not reply to that one. But just kind of user beware: any GPT that was built before two weeks ago was built for a non-reasoning model, a chat model, and now you can pick reasoning models, and they don't work the same. So there's a chance the GPTs will not function how they're intended.
43:57
So would you go back in and recreate GPTs you already created with the reasoning models?
45:01
I'd have to run some tests to see, and I haven't seen anybody talking about this online yet. I haven't looked deeply into it. I don't know that that's the wisest path. I mean, I guess you could; it's possible maybe in the system instructions you could just tell the GPT, if the user is using 4o, function in this way, and if it's using a reasoning model, function that way. I don't know. But there are eight models to pick from, so it'd be kind of hard to do that. So I feel like OpenAI just sort of popped the grenade, like, oh yeah, by the way, you can pick a model, and then provided zero guidance to creators on what this means for the millions of GPTs that have been built to function on 4o.
45:08
Okay, what excites you most about AI's potential for startups right now?
45:56
Yeah, I've said it numerous times: I think this is the best time in human history to be building a startup. I've built a few. It's hard, especially the first time around. I feel like every time you build a startup you learn some things. But the first time around, again, I was 27; I'd only been in the professional world for seven years at that point. I didn't know anything, relatively speaking. I didn't know anything about the legal side of starting a business, the financial management of a business, operations. I came out of a liberal arts college, Ohio University, out of the journalism school, and I had a business minor, or, I can't remember what we called it, a specialization. I'd taken a bunch of business classes, but I didn't know how to run a company. So there are so many growing pains you go through, and back then you didn't have an AI advisor there; you didn't have deep research projects that could do the research you needed to do. Everything just took longer, and it was very, very lonely. Anybody who's ever started a company knows it's a very lonely thing. And honestly, as the company grows, in some ways it becomes even more lonely, because it's hard to find peers who are going through what you go through. And, Kathy, you had a business; it's a very weird thing to be in, and it often feels like there's no one there who can guide you, and you're constantly looking for that guidance. And I feel like now, with AI, you have on call a high-level advisor in any discipline of business you want. Yes, it's not perfect, and it makes stuff up sometimes and hallucinates a bit.
But you can talk to it about legal questions and financial questions and operations questions, how do we do payroll, how do we develop comp structures. On demand, at any moment of your life, you can pull out your phone and get guidance on something, and that guidance would have cost insurmountable amounts of money for most entrepreneurs if you had to pay human experts to do this stuff. And then there's your ability to build things with a few people that would have taken 10 people. It's a golden age of entrepreneurship, and one of my great hopes is that the way we offset the job losses that are inevitable because of AI is we create so many more businesses. And yes, they will need fewer people, but there's more of them.
46:02
Right.
48:52
And so, yeah, that's what's exciting to me: anybody can create a company now with far fewer resources than were previously needed.
48:52
I think that's right. You know, I had my business for nine years and it was just me, because I didn't know a lot, didn't know how to do anything. I mean, my father was on speed dial and I just barreled through, and that was it.
49:04
Right. And then you would find an advisor and you're like, I don't even know if I trust this person, I don't even know what they're doing. They would give me some advice and it's like, I guess they're an expert, I should trust them. And maybe it was bad advice.
49:16
Okay, so we're at the last question, because I skipped a couple that were repetitive. You already answered them, so I wasn't going to ask you again.
49:29
Gotcha.
49:36
So, ending on this note: how have you seen companies using AI-generated efficiency gains to reinvest in people, like offering shorter work weeks or well-being benefits?
49:36
I haven't. This is, again, a great hope of mine, that this is where we go. I think most organizations, the vast majority of companies, are still trying to find those efficiency gains. Yes, some of us live in a bubble and we hear all these incredible stories. But I've been on the road a lot, meeting with lots of different companies and speaking at lots of different events, and the reality is that most companies still lack education and training internally, and still lack awareness and understanding of what AI is and how it actually will drive these efficiency and productivity gains. We're starting to hear lots of the downsides of this from big companies who are doing it and are already saying, we're just going to need fewer people. I mean, literally, that was the Andy Jassy memo: our workforce is going to shrink. The CEO of Amazon. And we're talking a lot about that on the podcast because that's the high-profile stuff. The companies that actually do this, the shorter work weeks, the well-being benefits, are probably going to be small businesses: the people who have that freedom, who are maybe owned by the founder, not under the pressures of venture capital, private equity, or publicly traded markets. They can choose to just run a company that is truly human-centered, be happy with whatever that profit number is, and say, I don't need my people working more than that; I actually want to keep these people around for the next 10, 15, 20 years, and let's just have a great life along the way. When I said this a couple years back, I would get a lot of pushback, like, that's not how business works, no one's going to do that. And my response was, well, I get the choice. I own a company. If I choose to do this, it will happen. That's how we will run the company.
And so I do think there will be lots of companies, maybe these AI-native companies that start from the ground up this way, that can build to be more profitable and more financially stable with their employees working closer to 40-hour or less work weeks, and that's enough. Traditionally, maybe people call that a lifestyle business or whatever. But when I was coming up in the world, I felt like people talked about that like it was a dirty word. Having a lifestyle business that makes a bunch of money and makes people financially stable and lets them enjoy their families, that was like a bad thing; you weren't motivated enough if that's what you were building. And it's like, I thought the point of life was to enjoy it, not to grind until I was 65 and then hope I could enjoy a few years at the end. So I don't know, I've never understood that mentality. But I think there is definitely the potential to build those kinds of businesses. I just haven't heard of them yet.
49:47
And I think, between artificial intelligence and the way this young generation thinks, they're not going to work the extra hours.
52:48
Yeah.
52:55
So you've got to figure it out.
52:56
Yeah. They're going to want to work for a company that does do that.
52:58
Yeah.
53:01
And again, then you're in this weird balance of, hey, I would love to provide this to our team, but I don't want people to be lazy either. Right? How do you keep people really motivated to build and still provide them with the luxury of, say, we're taking Fridays off during the summer? It could be little things like that. You have to have people who appreciate and have perspective on what that is. And I do worry that if the next generation just shows up and gets to work at a company where it's like, I don't have to work Fridays and I go in when I want, then they never learned the hard work and the ability to appreciate what they have. Whereas our generation did it the traditional way our whole careers, and if you're working for a company that all of a sudden allows this, I feel like your loyalty is only going to be higher. Your willingness to put in long hours when a major thing is happening is going to be way above the norm, because we appreciate it. I don't know how you give that appreciation to people if, from day one, they've just always had four-day work weeks.
53:02
Well, I think you manage the expectations and you build the culture. So that's just another thing we need humans for: to make sure all of this is going the right way.
54:08
Yeah, I don't know. That's cool. Those are good questions. And again, in case people aren't aware of how this works: I had no idea what the questions were. The way we do this is, when we're doing the live classes, I don't look at the questions; Kathy just picks them and asks. So when we do these sessions, one, I don't have time in my schedule to look at 20 questions, and two, my preference is just to do off-the-cuff things. So sometimes I just might not have a great answer for a question. But for me, it's way more fun. And honestly, if we had to do this where I actually prepared for these and saw them in advance, we wouldn't be doing this series, because there's literally no time to do it other than the hour we just spent doing this.
54:15
And honestly, even with Claire getting all this set up, I looked at them like two hours ago.
54:58
So that's good.
55:02
We're just doing this. We're just figuring it out.
55:05
Yeah.
55:06
All right, thanks, Paul.
55:08
All right, thank you, everyone. This was episode 156. Episode 157 will be our regular weekly episode on Tuesday, and then there's no second episode next week, right? And next week's the 4th of July, so I hope we're both taking some time off.
55:09
Correct.
55:25
All right, so thanks, everyone, for joining us for AI Answers. We'll be back next week with our regularly scheduled weekly episode with me and Mike. Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
55:26