#187: AI Answers - Overcoming AI Stigma, Vibe Coding, Redefining Productivity, Building AI-Native Companies, and Finding Trusted Sources
This AI Answers episode addresses 14 questions from business leaders about AI adoption challenges, including overcoming organizational stigma, redefining productivity metrics, and the importance of executive leadership in AI transformation. The hosts emphasize that successful AI adoption requires change management and education, not just technology purchases.
- Organizations are over-investing in AI technology licenses while under-investing in training and change management, leading to low adoption rates
- AI adoption should be led by business units closest to customers and decision-making, not IT departments, to drive innovation rather than just risk management
- Executive leadership must model AI behavior and create psychological safety for employees to experiment with AI tools without fear of judgment
- Only 10% of knowledge workers use AI daily, indicating massive opportunity for early adopters to gain competitive advantage
- The job market disruption from AI will accelerate in 2025, making AI literacy essential for career survival and growth
"I thought that by 2020, AI was going to be everywhere. Everyone would have already adopted it and we wouldn't even need separate AI education and events. We would just have marketing events and business events. I was very wrong on that."
"They over invested in buying gen AI platforms for all their employees before they taught them how to use them or gave them use cases for them."
"If the people that are deciding hiring strategies, budget allocation, aren't the people empowered to infuse AI, then what are we even doing in business at this point?"
"You have to level up your understanding of the technology, what it's capable of... You can go get educated and I really fast."
"There's a massive opportunity to be a power user and still be an early adopter and innovator. But just focus on using one of those AI system platforms to the fullest extent and you will create enormous value for yourself and your company."
I thought that by 2020, AI was going to be everywhere. Everyone would have already adopted it and we wouldn't even need separate AI education and events. We would just have marketing events and business events. I was very wrong on that. I realized that I had overestimated how quickly adoption would happen, but I'd actually underestimated the total impact it was going to have on business and society and the economy. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 187 of the Artificial Intelligence Show. I am your host Paul Roetzer, along with my co-host Kathy McPhillips, chief marketing officer at SmarterX. Hello, Kathy.
0:00
Hello.
1:23
Well, I've seen you today. We're actually in the office together today, which is lovely.
1:24
It is lovely.
1:28
Good. Part of the team is here today. Everybody's kind of cramming in before the holidays, trying to get everything done so we can take a little time off. So this is a special edition. If you are hearing our voices and not sure why a new episode dropped in your feed on maybe a Thursday or Friday, we do these special AI Answers episodes every other week, basically. This is a series we started at the beginning of this year, or late last year. I don't know when we actually started doing this. Well, Scaling AI, we started this year for sure.
1:29
I'm not even sure what day of the week it is.
1:58
I know that's true. All right, so this is the 11th episode of our series AI Answers. This is presented by Google Cloud. The series is based on questions from our monthly Intro to AI and Scaling AI classes along with some of our other virtual events. So basically, if you're not familiar, I teach an Intro to AI class for free each month. It's on a Zoom webinar, and then there's a Scaling AI class. Kathy co-hosts that with me and moderates the Q&A at the end. Intro to AI will get anywhere from, I don't know, like a thousand to fifteen hundred people. We usually get over two thousand registrants. And so, you know, you get about a thousand or so people show up and we'll get dozens of questions. And so the idea behind the series was, we can't possibly answer all those questions in the hour we have with attendees, so let's do this AI Answers to get through some of those other questions. And then we started doing the same thing with Scaling AI. So that class, we just did the 13th edition, and today's AI Answers episode is actually based on that 13th edition of the Scaling AI class. Same deal: we get a bunch of questions we can't get to, and in this AI Answers podcast series we just try and answer as many questions as we can. So right now we've got 14 questions lined up for you, and that's what we're going to go through today. This series is brought to us by Google Cloud. They are a partner and sponsor for a number of initiatives under our AI Literacy project, including the Intro to AI and Scaling AI classes themselves, this AI Answers podcast series, a number of AI blueprints that are going to be coming out in January that we're very excited about, and our Marketing AI Industry Council that we partner with them on. You can learn more about Google Cloud at cloud.google.com. You can also check out their AI Boost Bites. We'll put a link in the show notes.
It's a series of short training videos that are designed to help build AI skills and capabilities in 10 minutes or less. So, Kathy, I'll turn it over to you if there's any housekeeping items I missed here. Otherwise we'll jump into some questions.
1:59
No housekeeping, other than just saying, thanks, Claire, for always helping us get these questions organized and ready to go for this episode. So let's jump in.
3:56
Great.
4:04
All right. Number one: in many organizations, people quietly mock or look down on colleagues who use AI, treating it like a shortcut or a crutch. What fear is actually driving that reaction? And what responsibility do leaders have to confront it head-on?
4:05
This is an issue I've been personally noticing. I don't know, Kathy, if you've heard this a lot, but increasingly, when we think about adoption of AI within organizations, and obviously a lot of what we think about at SmarterX is adoption of AI literacy, AI training and education, there is definitely a perception from some that AI is not a good thing. Whether it's a fear factor, like they fear job replacement, or it's too abstract and they maybe just don't understand it, they feel threatened by it, or they've gotten to where they are in their career after 5, 10, 20 years being an expert and doing things the way they've always been done. And this is a change, and maybe they just don't have the same level of confidence. So there's lots of different reasons psychologically why people wouldn't just, you know, jump in and embrace AI. But we are definitely seeing this. And so I think this is why we often talk about, when we look at AI literacy, education, and training, it's very much a change management thing as well. What we've seen time and time again is people just go buy Copilot licenses or Gemini licenses or ChatGPT licenses and just give them to the team. Well, there's going to be a percentage of those people who want nothing to do with those licenses. Maybe they don't know what to do with them, but a lot of them just don't want to have to do it. And so I do think that it's very important that we think about that, we accept that, and we address it, and that we take a change management approach to the integration of AI technology and the integration of AI education.
4:18
Absolutely. Okay, number two. This is a two-parter, our first two-part question. In our series we often talk about AI eliminating rote, repetitive work. But some of that work, while mindless, is still productive. It creates momentum and white space and gives people a mental breather. So two questions. Number one, is there value in intentionally keeping some of that work, not just for fact-checking or human-in-the-loop oversight, but as a form of cognitive reset?
5:51
That's a really good question. I'm not sure if I've ever really thought about it in that way, but I totally get it. Sometimes the mindless stuff... like for me, and I'm just gonna zoom out and think personally here for a second: when I'm doing creative work, strategic work, I have to listen to instrumental music. I can't listen to the normal kind of music I would listen to, the kind that gets you inspired and fired up, the kind of stuff you play during a workout, when I really need to focus on the words and things like that. So I find that deeply fulfilling work to do. But I also love the work, to this attendee or listener's perspective here, where I can put my other kind of music on, with words, and I'm just filling out a spreadsheet, or I'm just doing the thing that requires me to spend an hour on the thing. And you still feel fulfilled when you're done. But the reality is a lot of those kinds of things will be able to be done by AI. And so this question of, what is the value of that work? Sometimes it's not just doing the tasks; the tasks themselves actually clear your mind to allow you to get back into the deep thought thing. I don't know. I mean, that's interesting. I totally get the perspective that it is valuable to go through that process. I think you're going to have to work for companies and leaders who would understand that value and be like, yeah, I get that you like doing the thing that takes an hour, but realistically we can do that in five minutes with AI, so we'd rather you didn't waste the hour. Yeah, I don't know. I could see that being a challenging discussion internally for people.
6:18
Yeah, I mean, like last week I was putting post its on the calendar.
8:05
I'm lost.
8:08
And just the mental break, while I knew I was still doing something productive, gave me a chance to kind of reset before I jumped into the next thing where I really needed to be heads down, focused.
8:09
Yeah. And again, I didn't really stop and think about this until this question came up. But there are definitely things I do that are just kind of monotonous, but they're like a mental break for me, and I like going through the process. I don't know. I mean, part of this goes back to an idea I focused on in my AI for Writers keynote this year, specifically related to writers: just because AI can do something doesn't mean we have to let it. There are just tasks that are valuable to us as humans for different reasons. Sometimes the human in the loop matters, and other times it's because we actually enjoy it. It's a part of our job we don't really want to let go. So, yeah, I could see that going both ways in the future, but I think knowing what those things are that still matter to you to do is, you know, an important first step for people when they're thinking about AI integration.
8:21
Right. And what gets them through that process? If they know they need 45 minutes of downtime doing something productive to get them to that next thing, then yeah, that does make sense. Which brings us to the second part of the question: in an AI-enabled workplace, should productivity still be the primary measure of an employee's value?
9:07
I don't know who asked these questions, but these are really, really good questions. Should it be, versus is it going to be, is probably the important distinction here. So should it be? My argument would be probably no. Again, this depends on what your job is, what the industry is, things like that. If I go back to my agency days, and if people aren't familiar, I ran a marketing agency for 16 years, productivity mattered greatly. How much you got done in a one-hour time period, what was the value created for the client? That was very important. But again, productivity versus value. If an employee spent an hour producing two outputs and they created no value for the client, and then another employee spent 10 minutes producing something that created dramatic value for the client, it's the value of the output that actually matters, not the process of just creating a bunch of things. And so my general feeling, and I certainly take this approach with SmarterX, is I don't track when people work or where they work or what they're actually doing down to a tactical level. I'm not that concerned with those things. I think about what is the value they're creating. And we definitely have people within our organization who maybe in fewer hours can create disproportionate value. And it's funny, actually, that this question's coming up, because I'm sort of heading in this direction. I got home yesterday... so I started working yesterday at 5:30 in the morning. Now, I listen to podcasts while I'm working out, so it's kind of working out, kind of working at the same time. And then we came in, we had multiple meetings. I did a bunch of what I think might end up being disproportionately valuable things, like thinking about stuff we could launch next year. So I had what was probably a really good day, and then I was fried.
And I go home at like 3:30, and I was literally sitting in the living room with my wife and my kids, and I'm like, you know what, I should probably work until 5 p.m., there's still more work that could be done. But I think I actually created a disproportionate amount of value today. I think it's okay if I take an hour and a half off and I'm not continually productive the rest of the day. And they were laughing at me, but my wife's like, that's actually probably a good way to look at it. So I think maybe that is related here. Sometimes it's just, what is the value you're creating, or the future value you're setting up to create? Not, did I work nine straight hours and do the 15 things on my list?
9:27
Right. And by the way, 5:30 to 3:30 is 10 hours.
11:52
So yeah, it's probably sufficient. But you do have the mind of an entrepreneur; there's no turning off. Yeah.
11:55
And we've talked about that before. Like you've walked out of your office before and I'm like right next to you and you walk out and you're like, my brain's done.
12:01
Like, yeah. And you got to know that, like for sure. Yeah, yeah.
12:08
Okay.
12:12
Definitely done yesterday.
12:12
This is question number four, because of that two-parter. Number four: if AI adoption is a leadership responsibility, not a technical one, what are two to three visible behaviors executives must model to make AI use feel safe, normal, and expected across teams?
12:13
So, you know, there are certainly technical aspects to AI adoption. There's no debating the importance of which platforms you're going to build on and allow, and how they're going to be integrated into what you do, things like that. But I do think, with this idea of modeling the behavior from an executive level, there are some things that immediately come to mind. One is having a point of view on this. We've stressed in the courses I create through AI Academy, and some of the talks I've given, this idea of an AI-forward CEO memo, where you have the executive saying: listen, we think this is really important. We believe it is very important to you. If you would like to continue to develop your career path at this organization, you have to demonstrate not only an understanding of AI, but a competency with it, and this idea of pursuing mastery. So one is just setting the tone and the expectation. That's the most fundamental behavior from the executive. Then it's being a part of the process. So telling people, we're gonna get you Copilot licenses or ChatGPT licenses or Google Gemini, and, like, go do the thing, and then, you know, those executives themselves have no idea what they're talking about. They're not using the tools themselves. They have no idea how to build a GPT. So I just think about being engaged in that process. Like, we were talking last week, or maybe it might have been yesterday, about running hackathons internally to experiment with specific tools and technologies in generative AI, and involving them. Well, if I build this process and say, hey, team, we're going to start running hackathons in January, we're going to learn how to use different tools, and then I don't show up, what example is that setting? So I think it's setting the vision, but then being a part of executing that vision, and then enabling people to, you know, democratize innovation.
That last piece, I think, is where it's not just me or any leader top-down saying, this is how we're going to use it, these are the tools you're allowed, here are the use cases, and don't deviate from this. It's more like, hey, we're going to give you the autonomy to actually figure out how to use these tools in really creative ways. We're going to put some frameworks in place to help you do that, like hackathons and council meetings and whatever they are, workshops we're going to run together. But we want you to actually bring us ideas to do this. So, yeah, and for people who aren't familiar with how we work: I have not seen these questions. I don't know what the questions are going to be. It's basically like we're doing this in the live class itself, so I'm thinking off the top of my head here. But those are some of the things that come to me as a great way to model that behavior and then give people the inspiration and the freedom to also experiment.
12:28
And this might vary by company size, but is there value to a CEO saying outright, or in some way: I know what our business goals are, but I am not the AI person on our team. I need you all to help me figure this out?
14:58
Yeah, for sure. Like, you know, most of the time the CEO or the different leaders probably aren't going to be the most knowledgeable when it comes to these things. I said that internally last week to, I think, a few people: I wish I had more time to spend on these tools. There are certain applications that I see as being tremendously valuable and holding enormous potential for what we can do as an organization, and I just don't have the time to experiment with them the way I wish I could. But I'm also trying to just be okay with that and realize, okay, my value to the organization isn't to be the expert in all these tools and be building the Gen AI app reviews like Mike and Claire get to do each week. I have to have the vision to do the other thing and to democratize it, and I'll learn the best I can in the process, but I'm not going to be that person. And so I think, from a leadership perspective, when it's not going to be you that's the expert in these tools, it's putting the people in place that are, or enabling the different people in your organization to each develop their own expertise. Even when we think about the Gen AI app reviews that we do as part of AI Academy, we've sort of fallen into this initial niche where Claire is the one doing audio, video, and image, because that's more her world. Mike's focusing on productivity and agents. And as we start to build that instructor network, we might diversify that and bring in other people who focus on different areas. But that's what we're kind of doing. Claire's creating Gen AI app reviews where I would love to have an hour to go experiment with the things she's doing.
15:15
Agreed.
16:39
But I just accept that. Okay, that's not my thing. I've got other stuff I've got to worry about.
16:40
I told Macy that the other day. She was working on some social and using AI, and I said, this should totally be a goal of yours: to do a Gen AI app series or class on what you're learning.
16:45
And I would love that if, down the road, our whole team, everybody, is creating Gen AI app reviews of the things that they're using for their specific roles. That's part of the vision of Academy: you diversify the perspectives and the tools and, you know, tie it to different departments and different roles. So, yeah, I think that's a good...
16:55
Especially if they're the same tools and we're using them in different ways.
17:10
Right.
17:13
It doesn't need to just be different tools.
17:13
Correct.
17:15
Okay, number five, what are the clearest structural signs an organization is talking about AI transformation while actively resisting it? And why do leaders often mistake visibility for progress?
17:16
This is funny. So if you listen to episode 186, I shared this parody of Copilot adoption, a tweet that somebody shared: I bought Copilot licenses. We called it AI transformation. And they asked how we were going to measure the transformation, and I was like, dashboards. And what kind of dashboard? That's what it is. If you go listen to 186, you'll laugh. It's a very funny tweet. Not my tweet, it was somebody else's. But I do think, in Scaling AI, the class that we taught that these questions are based on, we teach this five-step process: creating an AI academy, where you're doing education and training, developing AI councils, generative AI policies, doing impact assessments, having an AI roadmap. And then when you go beyond there, it's like, okay, you have a center of excellence that's talking about top use cases. You're providing workshop education and training where you're teaching people personalized use cases. You're monitoring adoption and usage of the platforms you're buying. There are all these other things you could look at. But I do think the most common misstep we see is: let's just go get some generative AI technology for everybody. We bought licenses, and that equals transformation. It's like, no, that's just going to have 80% of the people maybe log in once and never do anything with them. Or maybe it's a couple of times a month. I think we shared a stat in episode 186, if I'm not mistaken. It was a Gallup poll that said only 10% of knowledge workers use generative AI daily, I think is where we're at with their numbers. So think about that. There's all kinds of perceived AI transformation that isn't reality.
17:27
Right.
19:05
And that's often what it comes down to is just buy the tech and hope people adopt it without actually going through the change management.
19:06
And if you think about AI Academy, you know, we're in the process of making sure that team leaders can look in and see the progress of their team. Because we don't want them to just buy it, we want them to use it. So what can we do on our end to work with these team leaders, to say: check these on a regular basis, get people in there, how can we help you? And trying to build plans for them to roll this out and operationalize all of it.
19:12
Yeah, and that's where... I don't think you were in the meeting yesterday, Kathy, when I was talking about this, it was a different meeting, but I'm thinking a lot about this idea. We have a lot of companies coming in buying business accounts for AI Academy, in some cases more than 100 licenses. We have companies looking at buying thousands of licenses. And I think of AI Academy much like buying a gen AI platform like Copilot or Gemini. It's like, that's great, you took the first step, you got the education and training, but now how are we going to get people to take it? And not only to take it, but to actually transform their careers as a result of it, and then the byproduct of that is transforming the organization. One does not just equal the other. We don't just get the training and then it happens. And so I'm spending a lot of time right now thinking about what we can do as SmarterX to help drive that adoption, value creation, and change management. So I think there are going to be some things we'll be doing starting in Q1, where we'll get much more involved in trying to provide guidance for organizations to actually implement the AI education and training and make sure it's making a difference, not just something to check off the list.
19:31
Right. Okay. Number six, why do so many organizations default to treating AI as an IT initiative? And what breaks when AI isn't owned by the people closest to customers, content and decision making?
20:45
My experience has been that IT is not usually where the vision for growth and accelerated change comes from. It is often more about protecting, reducing risk, making sure security and compliance are adhered to, things like that. They're just not charged, in most organizations, with driving innovation and growth. And that is what AI is to me: an innovation and growth opportunity. And so you need to be empowering the leaders of different units and teams with an understanding of what AI is, what it's capable of doing now, and what it'll be capable of doing six to 12 months from now, so that as they're thinking about their talent, technology, and strategy decisions, they're layering in what AI is going to make possible. And that is not the domain of IT most of the time. So it's pretty direct: if the people that are deciding hiring strategies and budget allocation aren't the people empowered to infuse AI, then what are we even doing in business at this point?
21:00
But IT should be involved in your process?
22:07
Oh, definitely. I mean they, they have to be, especially in larger enterprises, but they should not be leading, in my opinion.
22:09
Correct.
22:15
Yeah.
22:16
Okay. Number seven: what is vibe coding? Is it a fad or something with longevity? How do I get started?
22:17
Yeah, I mean, vibe coding is basically... and again, this has carried over into vibe marketing and vibe everything. I'm going to give you my understanding; we've talked about this on the podcast a few times. It's basically being able to go into these tools and just start building something. Because these tools, like a ChatGPT or a Gemini, are able to do the coding, you're able to say, okay, here's what I'm trying to build, and it'll build it. And then you're like, okay, let's change it like this. You're basically iterating on the code in real time with the generative AI tools, to where you can just get in and start building stuff. And so it's just based on vibes: the feel you have and the different directions you can take it. So I don't think there's a dictionary definition yet, but my perception of it is it's really just that idea of getting in and building something, or, say, building a campaign: I'm just going to hack together a landing page and an email. I'm just going to get in and vibe code this thing, or vibe build this thing. I don't know if that is a trend. I think it's just a term right now that seems to have stuck for the last nine months to describe something, but it basically just means going in and iteratively building something. You could do it with research, strategy, documentation, organizational design. It's just sort of a term, but most of the time this is referring to building actual applications or software.
22:23
Okay, number eight, if you could go back to the very first AI show episode and correct one major prediction or assumption you had about AI, what would it be and why? And what has changed?
23:46
Well, the very first AI episode was probably an interview with someone, because people might not know this, but it originally started as the Marketing AI Show. And my plan with it was to interview thought leaders in the space, entrepreneurs, people within AI labs, things like that. And I just never did it, because actually doing interviews is hard. It's a whole production thing, you've got to figure out timing, and it became a pain. So our podcast was just sort of hanging out there, and it wasn't happening. And so in October, I think 2022, right before ChatGPT came out, I went to Mike, my co-host and our Chief Content Officer, and I was like, hey man, we're just sharing all these links each week. What if we just started a show where we basically curate the best stuff from each week and we just talk about it? And that was episode, I don't even know, 30-ish? What's that?
23:58
That was episode like 30, maybe 20.
24:50
20 maybe, somewhere around there. And we didn't even know what the metric was that you monitored to know if a podcast was working. I had no idea how you tracked these things. I didn't know downloads was the best KPI to look at. We just started doing it just to talk, so we didn't let all these links each week go to waste; there was some point to it. So I don't know that early on I was making many predictions, but let's just assume that I did, back in 2022. I would honestly have a hard time finding something where I was just blatantly wrong. And I don't mean that in any kind of arrogant way. We've tried to be very objective about what we think is going to happen. We've tried to be very conservative on the timelines with which I thought things would happen. We try to be very thorough in our research, so that it is not just me making ideas up and throwing out crazy predictions. The model we have for the show is very objective and research-driven and follows journalistic methodologies. That being said, the closest I've come to making predictions, I would say at a broad level, would be the road to AGI, the AI timeline, where I've sort of projected out how agents would emerge, and robotics, and things like that. I've done that two years straight, and I wouldn't change anything on that timeline yet. So that one is something where I actually feel pretty confident. The thing I will say, at a very broad level, that I got wrong: back when I started the Marketing AI Institute in 2016, I actually thought that by 2020 I wouldn't even need to have AI in the name of it. I thought that by 2020 AI was going to be everywhere, everyone would have already adopted it, and we wouldn't even need separate AI education and events; we would just have marketing events and business events. I was very wrong on that. And so that actually changed for me in 2021.
I read Genius Makers by Cade Metz, and I realized that I had overestimated how quickly adoption would happen, but I'd actually underestimated the total impact it was going to have on business and society and the economy. And so that's the day I decided to sell my agency and focus on AI exclusively. That was spring of 2021. So I would say, at the broadest level, the speed at which adoption would happen is the one thing that I feel like I just missed, by like half a decade. But since then, I think because we basically try and look six to 12 months out, and I usually project things based on things I'm hearing and seeing firsthand, yeah, I would have a hard time picking something and saying, yeah, I was just completely off on this. I mean, in our book in 2022, nine months before ChatGPT, I wrote a section called "What Happens When AI Can Write Like Humans?" And we predicted, not ChatGPT specifically, but that we were basically on the cusp of something like ChatGPT emerging, where it was going to change everything. And that ended up being pretty right.
24:52
Okay, number nine. Listeners have asked you questions about careers, ethics, strategies, and tools. What is one listener question that has fundamentally changed how you think about AI, rather than you changing how they think about it?
28:04
Oh, man. So I actually have a pretty good answer for this one. I don't remember what year it was, but I think we were at Content Marketing World, and I know I was catching a flight to Boston that night. I was sitting at a networking luncheon, and someone said to me, what are you most excited about with AI? And I froze. Like, I literally just stared at the person. And I was like, I actually don't know, because there were so many things building up at that time, kind of like today, where I could just sit down and list for you the hundred things that I was worried about. But what I was excited about was very difficult in that moment to say, and I did not have a good answer. So on the flight that night, I actually forced myself to write down the things that I was excited about. And that ended up becoming a key part of my keynote for MAICON that year; I actually featured those things on the final slide of that presentation. It was things like a golden age of entrepreneurship, that I thought we were heading for this thing where, you know, anybody could build anything, the walls were coming down to start businesses. I thought a renaissance in creativity, of creatives who could work with the AI, was going to be amazing. And basically the whole premise of my keynote, what became the final piece, was that we could create more time, more time for the things we cared about: family, friends, working on fulfilling projects. And that was why I was pursuing AI, to create more time in my life. So, yeah, that was the one that stuck with me. It honestly changed the trajectory of how I even talked about things on the podcast, and probably some of the trajectory of how we thought about things as an organization, what my points of view on it were, and my own personal need to wake up each day and find the positives in it, because I was getting overwhelmed by the downsides.
28:19
Yeah. Do you remember who that person was?
30:19
I don't. I'd never met the person before. It was a lady, I remember that. And it was totally random. I wasn't even talking to her. I was just sitting there eating, and I think she knew who I was and maybe listened to the podcast or something, and she just asked me, and I froze.
30:22
Like, here we are.
30:41
Yeah. Weird.
30:42
All right, let's see. Number 10. What has been the most personally challenging part of leading conversations about AI's impact on jobs, identity, and the future? And how do you personally manage that weight?
30:43
I mean, my first instinct was to say, people don't believe me, but that doesn't actually bother me. I've been used to that. I've spent a lot of my career with people thinking I was wrong about things that I had high conviction in, and then, eventually, it became apparent that I was probably more right than wrong on some of those topics. Jobs is the one that I very consciously avoided talking about for a while on the podcast. I don't remember what episode it was, but I remember standing in the kitchen, talking to my wife, and I was like, I think I have to just say what's on my mind. And then I told Mike, listen, on the next episode, I'm just gonna lay it out. I think we're in for a world of hurt, no one's talking about it, and we gotta start this discussion on the podcast. And then I just said it, and I kind of broke down why I thought it was gonna happen, and the conversations I was having that led me to believe that. So I think you have to be willing to be perceived as wrong for a really long time. But anyone who's been an entrepreneur, who's taken huge risks to innovate in any area, knows that feeling. And honestly, I think I'm more comfortable in that environment. If I'm at a point where everyone agrees with me, then I feel like I'm not challenging the status quo enough, and that I'm actually falling into, I guess, more standard belief systems. And my experience in my career has usually been that if I'm on the frontier of thinking about things differently, that usually leads to positive things. So I get uncomfortable when everyone's like, yeah, no, I totally agree, jobs are going away, it's great. So, yeah, I think the challenging part is just being willing to be perceived as being wrong, and being okay with being the one that's willing to get out there and say it before it's consensus.
Personally managing the weight: the weight for me is more the things I have high conviction in, like this disruption I've seen coming for jobs for years. When I first realized the impact this was going to have on artists like my wife and writers like you and me, Kathy, when I knew that years before the masses came to know that stuff, and I'm talking back in, like, 2012 and '13, when I sort of played this out and saw what was going to happen. I would say it's the weight of knowing the personal impact this is going to have on other people, whether that is job loss or things they find fulfilling in their life that I know are going to change. And it's hard to be the one that knows that, and you don't even know how to say it to people. So I would say it's more that for me. And then there's the fact that I have a pretty decent understanding of how this probably goes wrong in society, and if I sit down and allow my brain to go in that direction, I can tell you a lot of the pain and problems that are going to occur. That sucks. I don't like that feeling. I don't like looking out and having a decent sense of how this is going to play out in bad ways while everybody else is sort of blissfully unaware. And I get asked a lot about this sort of stuff, and I often just hold back on that part of it, because I don't think people are really ready for it. A lot of times, especially with family and friends, you have to be conscious of where people are at with their understanding of this stuff and how much you can actually put on them at once.
31:00
I think there are things like the AI Literacy Project, the free classes, the Responsible AI Manifesto, talking to people, going to schools, doing all these things, where you also have this responsibility to be pushing the right side of this forward in the world.
34:33
Yeah. And I think part of that comes with how we position this. I have always had this sense of urgency without creating fear. It's sort of a guiding principle we have at SmarterX, and that is why the Literacy Project exists and why we do invest all this time doing all these free things, whether it's the podcast or the classes or the talks at schools. I feel like the way I manage the weight of the negative stuff, current and future, around AI is we just go do something every day to make a difference. And if we were just sitting still and not doing that, then I would probably be losing my mind right now. But I do feel like what we do matters in some small way, and I feel a sense of urgency to do more of it, because I think it makes a difference and it empowers other people to go out and make a positive impact. And I think we still have a say in this. I don't think it's inevitable that it has all these negative outcomes, but we've got to do more as a community.
34:49
Okay, number 11, you've talked a lot about AI roadmaps and scaling AI inside organizations. Looking back, where do you think most companies actually over invested in AI and where did they quietly underinvest in ways that will matter long term?
35:48
They over invested in buying gen AI platforms for all their employees before they taught them how to use them or gave them use cases for them. That's the example I referenced, sort of the parody tweet: yeah, we spent 1.4 million on Copilot licenses and no one's using them. That's the over investment. They thought the answer was to just go buy generative AI tools, but then they didn't invest in training people and doing change management. And that has been the problem since early 2023, when people were racing to go get these tools, and they have not invested in the people and what's required. People are complicated, and there are logical and illogical reasons why they don't want to be a part of AI. If you just ignore that as an organization, you're going to fail when it comes to this stuff. So, yeah, over invested in the tech too early by underinvesting in the people and the change management, I guess, would be my answer.
36:03
So talking about people, you know, we're looking at hiring some people. Some people right now, I know, are looking for work, and companies might be looking for people. What are some things you would say to folks who are in that position? What could they be doing right now to get ahead of this a little bit, or a lot?
36:56
I mean, it sounds like a broken record, but you just have to invest in AI education and training for yourself. You have to level up your understanding of the technology and what it's capable of. Again, we offer a ton of free stuff: go to the intro class, come to the scaling class, go back and listen to the last 10 episodes of the podcast. You'll get a real good sense of where we are, the state of AI, and what the important things going on right now are. If you can afford it, take some of our paid courses and get certificates. Go take free classes from Google and OpenAI and others who offer certificates on their platforms. You can go get educated in AI really fast. And then the competency is: use it every day in your personal life. If you're not using it in business, and maybe you're not provided a license, go plan a trip, plan your holiday menu, build a workout plan, figure out how to coach your kids in math. Just experiment with the tech, use the tools and get used to them, and then layer that over your capabilities. I mean, the jobs report came out today. It sucks. It is not good. We are in a not-great economy right now that is being bolstered by the investment in AI infrastructure. If you stripped AI infrastructure out of the economy right now, we're in a recession, and there's a lot of people already losing their jobs, and it's not being directly tied to AI in a lot of cases. That's going to change. There's going to be way more job loss in Q1 of next year, and then probably the remainder of the year, and it's going to be directly tied to AI. If you're talented at what you do, layer AI capabilities over it and go find companies that are trying to be AI-native or AI-emergent, that value those skill sets. Because I don't know how else to do this.
You have to just be willing to be the person on your team, or the person in your industry, who is racing ahead to try and figure all this out. And I think that's the best chance you have to thrive through a lot of uncertainty. It is not going to be an easy period for jobs. I wish that weren't true, but we can see it just through SmarterX. We hear from people every week, Kathy, you probably hear from more people than I do, really talented people who are on the market, or know they're going to be on the market in the not too distant future, and they're trying to be proactive about it. I wish I could make it better for everybody, but all I can do is keep growing our company and trying to create job opportunities here, that new age of entrepreneurship, just build something, and trying to educate organizations to level up their people so they can drive innovation and growth and not remain stagnant in the economy.
37:13
Okay, number 12. If you were starting a brand new company tomorrow that had to be AI native from day one, what is one thing you would refuse to automate, no matter how good the tech gets, and why?
39:53
Vision and strategy. As a leader, I use ChatGPT and Google Gemini all day, and I say both of them because I literally use both of them. If it's a really important strategy, I will talk to both AI assistants about it and compare the outputs, things like that. But the vision for where we go, the goals we set as an organization, the strategies of how we get there, the people we're going to hire, every element of that is AI assisted right now for our company, and I would think of us largely as an AI-native events and education company at SmarterX. I don't turn any of it over to the AI assistant, though. It's just my thought partner, and it's my thought partner for finance decisions, legal decisions, HR decisions, business strategy decisions. I talk to it about all of it, because it gives me somebody 24/7 that I can bounce things off of. A lot of the really important decisions I've made, and I've been in business 25 years now and owned my own companies for 20 of those 25 years, I was probably just sitting there talking with my wife out loud about things. She's a great listener, and every once in a while she'd ask really insightful questions. But normally it's just the process of saying out loud the thing I'm trying to decide, or the direction I'm trying to go, that enables me to make an educated decision. And AI assistants function in that way. I was telling Kathy and the team this morning: on my drive in, I was talking to my dad. Sometimes I just call him on the way into work. We were having a meeting this morning on an important business decision we had to make, and so I was telling my dad, yeah, I'm just trying to decide this thing. And part of that is just keeping my dad in the loop on what's going on in my life.
The other part is, I realized I'm just trying to think out loud, to get this stuff out of my head, and in going through that process, sometimes it's just like, oh, okay. So that's what AI assistants are. But often what I'll do, and I did this this morning before that meeting, when I was trying to decide another, even bigger decision I have to make in the next 30 days: I said at the end, here's the basic premise, here's the context, here's what I'm trying to decide. I want you to make a recommendation, but if you don't have all the context, ask me one question at a time as we go. Jeff Woods talked about this idea of, like, give me one question at a time. And so I did. I had this 30-minute conversation with AI, and now, when I go meet with my attorney in 48 hours, I have way more context and understanding, and I actually have a point of view on the decision that has to be made that I wouldn't have had if I didn't talk to my AI assistant. So I don't know. I think at the end of the day, the human still has to make the decision, especially when it's related to the vision or strategy of an organization or a team. But the best leaders are going to infuse AI to help them make more informed decisions.
40:05
Shout out to your dad, number one.
42:55
Yeah. Who I'm sure will be listening to this episode.
42:56
So.
42:58
Hey, dad.
42:58
Okay.
43:00
He's like our number one, you know, fan of the podcast.
43:00
So number 13. You posted on LinkedIn last week, and in your Exec AI newsletter, your favorite podcasts you listen to for AI news. Aside from obviously liking the shows and guests and gleaning value from them, which might be enough, what is your measure for adding a podcast or other medium to your trusted sources?
43:03
So the ones that get added to that list, it's usually because they had a phenomenal guest. I mean, that's almost exclusively how it's happened. Some of the best ones: 20VC, I love that podcast, and I had no idea, I didn't follow them before. Dwarkesh just kills it; he has amazing guests. Lenny's Podcast, I'd never heard of it until, like, three months ago, and I forget who he interviewed, but it was just a phenomenal interview. So, boom. You know, subscribe.
43:22
Like, that's a great podcast.
43:56
Yeah. 80,000 Hours. There are people that have access to people that I don't. They have inside access to people in the labs and different entrepreneurs, or in the VC world, the companies they fund, things like that. Usually it's through Twitter. X is where I capture the vast majority of the information that informs my perspective on AI. There are about 300 or so accounts that I get alerts from, and if it's important enough, someone in that group of 300 is tweeting about it. So if I see a clip from a great interview, it's like, where's that from? And I'll go find the YouTube clip or the podcast clip and add it to my list. I would say nine times out of 10, when a new podcast finds its way in, it's because I saw a clip of a great segment on Twitter. I never go into podcast apps and search for new AI podcasts; I've never done discovery through the podcast network itself. It's always through clips that I see.
43:57
Okay, so you posted that on LinkedIn last week.
44:54
I think I did. And then in our executive newsletter, I included the links to all 18 of them.
44:56
Yeah. Okay, last question, number 14. We talked this morning in an internal meeting about simplification. How can listeners think about simplifying how they approach piloting and scaling AI?
45:00
So the guidance I usually give on this one is: don't overthink this and get overwhelmed by the hundreds or thousands of tools and all the new funding each week and all the new models. It's overwhelming. I live this stuff every day, and there are definitely some days where I'm like, oh my God, I don't even want to look at my Twitter feed today. I don't want something else new today. So what I generally guide people to, depending on what license you have, either personally or professionally: just get really, really good at that AI assistant. You can't go wrong with Anthropic's Claude, Google Gemini, ChatGPT, Microsoft Copilot. If you just learn the capabilities of those tools, image generation, video generation, the reasoning capabilities, doing deep research, talking to it like an assistant, asking it great questions, if you just learn how to do those things, you're going to be ahead of, like, 95% of users of these platforms and generative AI technology overall. Don't get lost thinking you've fallen behind everybody and everyone else has this figured out. They don't. You may live in a bubble where you're hearing everyone talk like they've got this all figured out. That is not the norm. Again, go back to the Gallup poll we talked about this week: 10% of corporate workers, or knowledge workers overall, use AI daily. There's a massive opportunity to be a power user and still be an early adopter and innovator. Just focus on using one of those AI assistant platforms to the fullest extent and you will create enormous value for yourself and your company.
45:15
Amazing. All right, that is the end of.
46:53
Our questions for today, and our second to last episode of 2025. Right. This will come out on Thursday; we're recording this on Tuesday, December 16th, and this will drop on December 18th. And then our final weekly episode will be on December 22nd, and then we will be back; January 6th would be the next podcast episode. Yeah, sounds right.
46:55
That sounds right to me.
47:17
I totally forgot we were doing this one. I think when I did the closing to episode 186, I was like, all right, we'll talk to you next week, that'll be the last episode. And I looked at my calendar today and I was like, oh, right, we still have AI Answers. All right, well, thanks, Kathy. I think we've been doing this for almost a year. I don't know, there's 11 of them; maybe it's been six months. This whole year is a blur. But thank you to our listeners.
47:19
Matt would say it's been almost a year.
47:44
All right, well, hopefully this is really helpful for everyone. We're definitely going to keep doing this series in 2026, so stay tuned for not only the weekly episodes coming on Tuesdays, but AI Answers, and then we've got some other new exciting things. We're going to do an AI Transformation series. I don't know if I've shared that before, but we're working on a new podcast series that's going to be interviewing people who are going through and leading transformations within their organizations and teams. So we're really excited about that series; it'll launch in early Q1 next year. So lots more coming on the podcast. Again, we appreciate everyone being a part of it, and we will talk to you again soon. Thanks, Kathy.
47:45
Thank you.
48:18
Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
48:20