#177: AI Answers - AI Ethics, Flagging AI Content, AI Accuracy, Book Recommendations, & AI Intellectual Property
Paul Roetzer and Kathy McPhillips answer 13 audience questions on AI ethics, deepfakes, intellectual property, hallucinations, and practical business applications. The episode emphasizes that AI, like the internet, will enable both tremendous benefits and serious harms, requiring human-centered responsible practices rather than relying solely on policy or technical safeguards.
- AI's moral impact mirrors the internet's early days—the net good justifies building it while managing emerging harms through responsible principles rather than waiting for perfect regulation
- Hallucination mitigation prompts have limited effectiveness; human verification remains essential regardless of technical workarounds or system improvements
- Deepfakes and AI-generated content require crisis communications planning now, not reactive response—companies must game-plan scenarios and assign accountability before incidents occur
- Authenticity and human connection are becoming competitive advantages; unscripted expertise and in-person events drive trust more effectively than AI-generated content
- AI adoption requires both optimization (10% efficiency gains) and innovation (10x business model transformation) thinking; optimization alone leads to obsolescence against competitors pursuing radical change
"The best parallel to artificial intelligence is probably the advent of the Internet. And if we go back to like the early 90s, when it really started being available to consumers, you could look forward and say, wow, the Internet is going to enable a whole bunch of really horrible things... Should we build it? And the answer is probably 100 times out of a hundred, yes."
"There are going to be horrible things that happen as a result of AI. Some of them I could sit here and list for you. Some of them I don't even want to conceive of. And in the process, we're also going to solve diseases and figure out how to create abundance in terms of energy."
"Just because AI can write doesn't mean it should. And like, how do you know when to just write yourself versus let the AI do the thing? And for me, like, the biggest factor comes down to authenticity, like expectation of authenticity."
"Optimization is 10% thinking. Innovation is 10x thinking. Everybody is going to be doing that. But if you have competitors who are looking at this as saying, yeah, but how do we reimagine the entire thing? Those are the people who have the 10x thinking."
"The best work comes from focusing on one problem at a time and nothing else. It's a constant struggle to live it."
The best parallel to artificial intelligence is probably the advent of the Internet. And if we go back to the early 90s, when it really started being available to consumers, when we started getting access to the Internet, you could look forward and say, wow, the Internet is going to enable a whole bunch of really horrible things. There's going to be a dark web where all kinds of horrific things happen. There's going to be cyberbullying, and all these things will emerge from this thing we're calling the Internet. Should we build it? And the answer is probably 100 times out of 100, yes. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 177 of the Artificial Intelligence Show. I'm your host Paul Roetzer, along with my co-host Kathy McPhillips, chief marketing officer at SmarterX. Welcome, Kathy.
0:00
Thank you.
1:32
Always wonderful to have you on the podcast.
1:32
Oh, I appreciate that.
1:34
This is part of our AI Answers series. So this is not a replacement for Mike. Mike is still the co-host for all of our weekly stuff. But Kathy and I do these special AI Answers episodes, which are presented by our partner Google Cloud. This is a series based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events. This is the 8th of these AI Answers episodes we're doing, so every once in a while we'll drop a second episode each week. The Intro to AI and Scaling AI classes, if you're not familiar with them, you can learn more at SmarterX.ai. We do free intro classes. We've done 52 of them now; this episode is actually from that 52nd Intro to AI class. And then we do Scaling AI classes each month, and those are completely free. We had over 2,200 people registered for Intro to AI this week. That was just yesterday, right? October 28th. Yeah. So what we're going to do here is answer questions we didn't get to during that class. I usually present for about 35 minutes, and then Kathy and I go through a bunch of questions. But again, we had a lot of people on that class, and so there's a whole bunch of questions we couldn't get to. So that's what this series is all about: simple answers to questions that our attendees have about AI. As I mentioned, this is in partnership with Google Cloud. They're the sponsor not only of this, but of our AI Literacy project as a whole. They help us with a lot of those initiatives. So we have a great partnership with the Google Cloud marketing team. They sponsor not only the AI Answers series, they do the Intro to AI and Scaling AI classes as well, and then a collection of AI blueprints. Plus we have a Marketing AI Industry Council that we run with Google. You can learn more about Google Cloud at cloud.google.com. And then, we've been mentioning these: they have these new AI Boost Bytes, which are great.
It's a series of short training videos that are designed to help build AI skills and capabilities in 10 minutes or less. So check the show notes; we will put a link to their AI Boost Bytes series in there. So, Kathy, I'll turn it over to you. You can give a little bit more context on how this all works, and then we'll dive in. I know you sent me a brief. I have not looked at it, so I love being surprised by the questions. So I'll let you run it from here.
1:36
I sent you the brief for, like, my sanity, not for yours. So we recorded Intro to AI 52 on October 28th. As Paul mentioned, it is the morning of October 29th; it is 10:04 a.m. Eastern as we record this. And what we do is, there are 20, 30, 40 questions we didn't get to every episode. So Claire, who, if you're a podcast listener, you've heard us talk about, does a dive into the questions and makes sure they haven't been asked on a previous episode. And if they have, we try to mix it up a little bit. And then we're going to go through those questions now, whatever is left, or some ones that we wanted to ask again live on the podcast.
3:51
Sounds like Wayne.
4:30
Let's get started with number one. We're jumping right in. So, is AI good or evil? What is safe for a business environment?
4:31
Wow. Okay, so good or evil? It's interesting, I was teaching a journalism class last night at Case Western Reserve University, actually, and I love the questions I got from students. A number of them were actually related to this idea: is it good, is it bad, how is society reacting to it? So this is becoming a more common question. It can be both. Like I often say, the best parallel to artificial intelligence is probably the advent of the Internet. And if we go back to the early 90s, when it really started being available to consumers, when we started getting access to the Internet, you could look forward and say, wow, the Internet is going to enable a whole bunch of really horrible things. There's going to be a dark web where all kinds of horrific things happen. There's going to be cyberbullying, and all these things will emerge from this thing we're calling the Internet. Should we build it? And the answer is probably 100 times out of 100, yes. The net good for society makes it worth it; you deal with the ramifications and the negatives as you're building it. You figure those things out, and then you try and create as much benefit for society as possible. I think of AI in a lot of the same ways. There are going to be horrible things that happen as a result of AI. Some of them I could sit here and list for you. Some of them I don't even want to conceive of. And in the process, we're also going to solve diseases and figure out how to create abundance in terms of energy. We're going to hopefully create new levels of human fulfillment. I think all these incredible things will happen. It is not going to be a straight line, and it is not going to be without very messy, painful parts of the process. So that's why I say it's both. It is good and it's evil. What is safe for a business environment? That's a pretty broad question, I would say, overall.
But, you know, I think that we teach a responsible approach to this, that it should be a human-centered thing. Whatever you're doing in a business environment, you should think about how it positively impacts the people within the organization. There are definitely going to be some companies that look at this as a replacement for people, and they will do that as quickly as humanly possible. They will find ways to automate jobs so they can just have fewer people working. I don't think that'll be the norm. I think that'll make the big headlines, and there'll be all kinds of stories about that impact in a negative way. But I think most companies, especially privately held companies with good people running them, are going to look at this as a way to create fulfillment for their people. I think about this all the time as we scale, Kathy: how do we enable people to spend more time with family and friends, and on their own well-being, while still scaling a company really fast? That has a really broad impact, but we can do it in a way where we can do the work of what would have taken 10 people before with a smaller team. And so we're going to keep hiring. We'll probably double our staff next year, but we probably would have had to 4x our staff to do what we're going to do next year in a normal business environment before generative AI. So that's how I think about it. And again, I think it's going to be independent choices that each company and each leadership team will make about the impact it has on their business.
4:41
Yeah, and I think we talk a lot about AI policies, but I think those AI policies are more like human behavior policies. We need to make sure that, here are the reasons we're doing all these things, but really it comes down to: how is the human using all of these tools, for good or for evil?
7:55
Yeah, we can put in the link to our Responsible AI Manifesto that I wrote in 2022, if I remember correctly. Or maybe it was early 2023; I think it was 2022. But the whole premise was that as we go through this phase, it has to be human-centered. There are 12 principles I outlined, and it's Creative Commons licensed, so anybody can take these principles and use them. So yes, I think you're right, Kathy. So much talk is around generative AI policies, preventing risk, and guiding people on how to use it. And not enough talk is about responsible AI principles, which should be part of that: how do we use it in a responsible, human-centered way, not just for our own employees, but for our customers, our community, all of our stakeholders.
8:12
Absolutely. Okay, number two. We asked this one live, but I wanted to ask it again: is AI a vector for viruses or Trojans?
8:51
Yes, it is. The computer-use agents are the biggest concern here. What I mean by that is, ChatGPT Atlas is the browser we talked about in episode 176. We went pretty extensively into this, so I will mainly say, go listen to episode 176 to get more concrete examples of how this works. But in essence, once you allow the AI to sort of take over your computer and go and do things on your behalf, like click around websites and copy and paste text, things like that, it starts to open you up to nefarious ways that people can very creatively take over your computer, inject things into your drives that can do things you don't know are happening. So people in the IT world are thinking deeply about this. They are working on this; the labs are working on ways to prevent this. But as a user, just know that specifically when it comes to agents that can go do things on your behalf on your devices, there are entirely new risk factors that start to emerge. So don't just jump in and say, oh, great, I can use ChatGPT Atlas and let it take over my computer, or I can use Google Chrome's computer-use agent, and Anthropic has one too. Just because the tech is there doesn't mean it's fully tested and safe, is kind of the thing. So in a corporate environment, listen to your IT team. This is a situation where, yes, I know it can slow things down, but you have to rely on the experts who actually understand this stuff, because they're trying to keep you and the company safe.
9:00
Right. And it's such an easy thing to do. It goes back to the principles and the guidelines. It's an easy thing to do: oh, this will save me so much time. Oh, it's Google and this company, it must be safe, right? It must have been tested thoroughly, so I know this is going to protect me and my company. But ask the question. Don't wait for IT to come to you and say, yes, this is okay; go to them and say, is this okay?
10:30
Yeah. And we even cited the Chief Information Security Officer from OpenAI on episode 176, who was explicitly saying, this is not fully tested technology. There are risks, and we're aware of them, and we're trying to resolve them. But they put the product into the world anyway.
10:52
Yeah, absolutely. Education, education, education, right?
11:08
Yeah, yeah.
11:11
Okay. Number three: if we're using AI information, can we be sued if AI is pulling intellectual property?
11:12
This is one that comes up a lot, but I don't think people fully comprehend it. So an example here would be, let's say, when Sora 2 first came out from OpenAI a few weeks back and they hadn't put the guardrails in place for intellectual property. You could create Disney characters doing things. You could create actual people doing things: celebrities, politicians, your CEO. You could do whatever you wanted, and there were almost no guardrails in the first 72 hours. So the question becomes, to make this tangible: just because OpenAI allowed you to create South Park characters or Marvel characters, does that mean you are liable for actually doing it? I don't know what the case law would say here. I don't know that it's been defined quite yet. But I am under the assumption that you should be cautious in creating copyrighted material or trademarked material when using AI, and don't assume that the liability lives at the lab level. You know, Disney, all these companies, they're going to sue whoever they can to protect this, and they're going to go after the big guys first. But that doesn't mean you don't have some liability as an individual user. So you have to understand the terms of use. And, like I always say, you have to have these moral clauses, because the law is not going to keep up with how fast this is moving. So you have to decide from a moral perspective, from an ethical perspective: am I going to do this? And that's why generative AI policies are so important within organizations. You have to have these sorts of principles in place about how you will act and behave, even if the law is uncertain in some of these areas.
11:19
And that's the Responsible AI Manifesto, one that I use all the time, because legal precedent is lagging so far behind. We just need to make the right decisions.
13:00
Yep.
13:09
Okay. Number four, is there one AI company that's more ethical than others, especially around environmental impact and data sourcing?
13:10
That's a tricky one. So, you know, I would say it's in the eye of the beholder. You could certainly say that Anthropic seems to take the higher moral ground. At least that's what they say; they want to create that perception that they have a greater focus on safety. And yet they got hit with a massive settlement. I don't know what the total fine was, but they basically stole 7 million books to train their model, and they actually had to pay fines for that. I think I mentioned this, I don't know which show I was on this week, but you can go search and see if your books are in that training set. Two of my books are; ironically, not the one I wrote about artificial intelligence, but my other two books are in there. And I think I'm actually eligible for $3,000 per book as a fine against Anthropic for stealing 7 million books. So even the ones who present themselves as being more ethical, they all stole copyrighted material. Now, stole is, again, a relative term. It may be found that it was fair use and they were allowed to do it, but we don't know. So there are some, like Adobe, who tried to do ethical training of their models early on, from an image generation standpoint. I don't know that that worked so well, because if you do that, then you basically have to restrict the training data to things you are permitted to use. And that is a much smaller universe of data, so it affects the quality of the models. And at the end of the day, I don't know that consumers care enough to use an ethically sourced model. They just want the model that works best. And so I think that's kind of where Silicon Valley landed on this: screw it, you know, people don't really care that much. There might be a small percentage of people who do, and we'll figure it out and pay the legal bills later. Let's just go hoover up everything we can possibly get and train these models. So, yeah, I don't know.
I mean, there may be some small niche players that are a little bit more ethical about this, but generally speaking, they all did the same thing when it came to training the models, and they continue to do the same thing.
13:20
Yep. You did reference environmental and ethical AI in episode 163, if people want to go back to that one and take a listen.
15:25
Yeah. And on the environmental side, the same thing basically applies. You need a bunch of compute and a bunch of energy to train the biggest models. There are some people training smaller, more efficient models that are going to be better for the environment. But at the end of the day, if we fast forward five years, the major pull on energy and the major impact on the environment isn't the training of the models, it's the inference. It's all of us using intelligence in everything we do, in our devices, in our software. That's what's going to end up being the thing. And so it's kind of hard to prevent that from happening at this point.
15:34
Yeah. Okay. Question number five. Someone told me to add a prompt to exclude hallucinations to avoid problems. Is that accurate?
16:10
I doubt it. I haven't heard that. So to unpack that a little bit, if people aren't familiar: hallucination is the technical term used by the labs that means it just makes stuff up. It gets stuff wrong. So if you ask ChatGPT to help you write a research report, and it does it, and it looks incredible, but then you start peeling it back and you realize, wow, it just completely made up a citation, or the book it's citing doesn't exist, or it was a different author than what it says, or it gets a date wrong or a person's name wrong. It just makes stuff up. Telling it to think harder, or to check its work, I could see those things possibly having an impact. I can't see them removing the human in the loop, of still having to verify everything. I guess it's possible; these models are weird. Little things: before we had reasoning models, starting with o1 in September of 2024, you used to be able to get them to improve their outputs by saying, think harder about it. And nobody really knew exactly why it worked, but it just did. So, yeah, it wouldn't shock me. I don't know about "exclude hallucinations," but I could see something like: check your work, make sure your citations are accurate, go search any citation and confirm the data. I could see things like that possibly making an impact. But I also imagine that all of that is going to be baked into the system prompts for every one of these models anyway. The labs are trying to reduce the hallucinations, and I'm sure they've thought of all the prompting tricks they can on the back end for the system prompt itself. So it might make a meaningful difference, but not enough for you to, I guess from an analogy perspective, take your hands off the wheel. You still gotta check it.
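To make the human-in-the-loop point concrete, here is a minimal illustrative sketch (not anything discussed on the show) of the kind of verification step that no "exclude hallucinations" prompt can replace: comparing model-cited sources against a list you actually trust. All source identifiers here are made-up placeholders.

```python
# Illustrative sketch: verification has to happen outside the model.
# A prompt can ask a model not to hallucinate, but only a check against
# ground truth can tell you whether a citation is real.

# Hypothetical set of sources a human has already verified.
TRUSTED_SOURCES = {
    "doi:10.0000/example-paper",
    "https://example.com/press-release",
}

def verify_citations(draft_citations):
    """Split model-cited sources into verified and needs-human-review buckets."""
    verified = [c for c in draft_citations if c in TRUSTED_SOURCES]
    unverified = [c for c in draft_citations if c not in TRUSTED_SOURCES]
    return verified, unverified

# A draft where the model cited one real source and invented another.
draft = ["doi:10.0000/example-paper", "doi:10.9999/made-up-citation"]
verified, flagged = verify_citations(draft)
print("needs human review:", flagged)
```

The design point mirrors the answer above: the check compares the output against something external to the model, which is exactly the step a hallucination-mitigation prompt cannot perform on its own.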
16:20
Yeah. Moral of the story is don't put that in there and then say like, oh, good, it's done.
17:59
Yeah.
18:03
Yeah. Okay. Number six: is it helpful to use one AI tool to fact-check another? Like using ChatGPT to check Gemini?
18:03
Potentially. I have done this. I will take an output from ChatGPT and throw it into Gemini and say, can you assess this, can you edit this, that kind of thing. Again, sometimes it might work, sometimes it doesn't. It doesn't remove the need for the human to still be the one who verifies it, and who still holds the authority of putting the thing into the world and being responsible for whether or not it was actually correct. But I could see it being a layer, kind of like having someone edit something. You know, if I write something and I pass it to Kathy and say, can you edit this, I'm still the one that publishes it. At the end of the day, if there's something wrong in it, it's on me, not Kathy; she was my editor. And so that's kind of how I would feel about this situation. I would equate it to a human fact-checker, where you as the author still hold the final responsibility. But it might be helpful, and you could say, hey, check the tone, check the style, check the grammar. I could see using it like that, as an assist, not as a final end product.
18:11
Right. And the diversity of the model training, does that matter?
19:10
Yeah, I don't know. The models end up functioning in a very similar way. The thing that makes them different is the system prompt from the lab that tells it how to behave and what sort of personality to have, and stuff like that. And you're going to be able to customize those things yourself. Right now, you can actually go in and change the personality of ChatGPT so that it's, you know, more positive and happy toward everything it says to you, or you can make it more critical. And so I think over time the models are going to be personalized based on your individual preferences. Right now they behave differently; the formats are a little different, the outputs are a little different. But again, it has way more to do with decisions made by humans in the labs that tell them how to behave, versus did they train on different data sets that make them come out of the box different. It's way more about human choice that goes into how these things function when they're out in the real world.
19:14
Got it. Okay, number seven: will there ever be a way to definitively identify AI-created videos? What if someone makes a video of me doing something illegal, or includes misinformation about my business? How can I protect myself or my clients?
20:08
Yeah, so this is two good questions packaged as one. The first is identifying AI-created videos. There's no universal standard at the moment. It would require all the labs working together and the industry coming together to establish a standard that enables any platform to know a video was generated by AI. So in essence, picture a universal watermark: whether it appears on Instagram or TikTok or YouTube or X or wherever, that social platform can instantly know it was AI-created because it has this universal identifier. What's happening right now is each lab has its own identifier. So there are watermarks in Veo, so Google DeepMind knows that Veo was used to create it. Same goes for Sora: OpenAI can identify anything that was created with Sora. They can do the same thing, by the way, with text and images and anything; the individual labs can put those things in there. It's just that we don't have a standard across the industry that everybody agrees to, and I don't see that happening anytime soon. I would love it. That would be an incredible step forward. I just don't see these labs coordinating to make that happen. What we need, at minimum, is for the social media platforms to recognize and disclose the different markers from the different labs; that would be a good step. So the second one: what do you do if someone makes a video of you, or of your kids, or your CEO, your board? People aren't going to like this answer: you're screwed. This is a fundamental problem, and we've known it was coming for years. We talked about this in our 2022 book, about the impact of deepfakes and how you actually had to build this into your crisis communications plans as a company. What happens if our CEO gets deepfaked doing something or saying something they didn't do? It spreads for two hours, and then we get it taken down. But we all know how social media works; it's already out there. So this is something you do need to be planning for.
I would say, as you're going into 2026, your crisis communications team has to be dealing with this right now. You have to have a plan for what happens. Who do you call at the different platforms? All of this needs to be listed. If you've never built a crisis communications plan: you basically game-plan scenarios of what could go wrong, you assign probabilities to each thing going wrong, and then you look at the potential impact if that event happens. And then you layer in, what do we do if it happens? Who do we call? What's the email? Who's the point of contact? How do we do it? How do we inform the board? All of that happens in the crisis comms plan. I used to do this for a living; crisis comms was early in my career. So that has to live in your 2026 planning docs. And if your team or your company isn't thinking about that, you've got to go think about it now. What do you do in your personal life? Different story. It's one of those things where I'm just waiting for the moment someone we know has to deal with this, because I know it's coming. We'll talk about an example of this next week, a famous scientist who's being deepfaked, and he's like, what the hell do I do? And he was tagging everybody, OpenAI, Google, YouTube, in his tweet, saying, I'm being deepfaked, saying things I never said, and now I'm getting all this blowback from stuff that I didn't do. How do I stop this? So this is a very real thing. It's going to be a major problem in 2026.
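The crisis-planning steps described here (game-plan scenarios, assign probabilities, weigh impact, attach a response and a contact) can be sketched as a simple risk register. This is an illustrative example only; the scenarios, scores, and responses below are hypothetical placeholders, not real guidance from the episode.

```python
# Illustrative sketch of a crisis-comms risk register: score each
# scenario by probability x impact, then rank so the plan addresses
# the highest-risk scenarios first. All entries are hypothetical.

def risk_score(probability, impact):
    """Score a scenario on 1-5 probability and 1-5 impact scales."""
    return probability * impact

scenarios = [
    {"event": "Deepfake video of CEO", "probability": 3, "impact": 5,
     "response": "Escalate to comms lead; file takedowns with each platform"},
    {"event": "AI-generated misinformation about a product", "probability": 4, "impact": 3,
     "response": "Publish correction; notify customer-facing teams"},
    {"event": "Fake press release attributed to the company", "probability": 2, "impact": 4,
     "response": "Alert newswires; issue official statement"},
]

# Highest risk first, so the plan's contacts and playbooks get built
# for the scenarios most likely to hurt the most.
ranked = sorted(
    scenarios,
    key=lambda s: risk_score(s["probability"], s["impact"]),
    reverse=True,
)

for s in ranked:
    print(f'{risk_score(s["probability"], s["impact"]):>2}  {s["event"]}')
```

In a real plan, each entry would also carry the point of contact, the escalation email, and the board-notification step the answer describes; the scoring just determines which playbooks to build first.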
20:23
Yeah. Okay, number eight: where do you decide that the human stays front-facing, like the podcast or webinars? Do you think the end user will drive our decision? Is it trust? Is it a desire for human connection?
23:39
Yeah. So, I don't think we put this one online. We should put this online, Kathy: my opening keynote for the AI for Writers Summit this year. We should put that on YouTube, if we haven't already, because this is where we dealt with this. I basically went through and said, just because AI can write doesn't mean it should. And how do you know when to just write yourself versus let the AI do the thing? For me, the biggest factor comes down to authenticity, the expectation of authenticity. So when I'm sitting here answering these questions, you don't want to know that I just had ChatGPT bullet-point these things out for me. That would not be authentic in any way. The reason people listen to the podcast is because it is authentic and unscripted. This is literally on the fly. Kathy's asking me questions that I don't even know in advance, and I'm responding based on 14 years of studying artificial intelligence and meeting with thousands of people. That's what we're trying to bring. And if it wasn't that, if it was literally just someone who decided to try to become an AI influencer, and they're just asking Gemini and ChatGPT all these questions and then summarizing the answers, that falls flat in a second. If you get them off their script, it falls apart. So I always say, to have confidence in the material, you have to be able to stand there and answer unscripted questions for 10, 20, 30 minutes. If you can do that, then you actually have domain expertise. You actually have confidence in what you've done and the experience you've gained. So I do think that's what matters, whether it's your podcast, webinars, articles you write, or posts you put on social media.
If you want authenticity, if you want to establish thought leadership, it cannot be the words of an AI assistant. Anyone can do that. And then the desire for human connection: that's why I think things like this are so important. We're very, very bullish on in-person events. You can't fake those things. And we see this ourselves with our community and our MAICON event that we just had a few weeks ago: human connection is becoming more and more critical. That was the thing we heard probably more than anything at that event, just how meaningful it was for people to be with other people who are all trying to figure this out. And you cannot simulate that on a Zoom call or in a Slack channel. You can initiate it, but there's nothing like human connection. So, yeah, I think that's going to be absolutely essential, and you should be thinking strategically about it going into next year. How do we amplify human connection, and how do we ensure authenticity comes through in the content we're creating?
23:54
There was a podcast I listened to a few months ago. They were taking all the AI news, and I'm fairly certain they were taking all the AI news from this podcast, synthesizing it, and making an AI-powered podcast. And I was like, interesting. So for a minute I was like, oh, is this going to hurt us at all? And I listened and I was just like, oh, we're good. It's just such a different experience hearing it from someone who explains it to us, talks about it, helps us understand it, puts it in words we understand, versus just, here's your news. And it was the most boring thing I've ever heard.
26:43
Yeah.
27:16
So I think we want a little. I mean, not that this is like, super entertaining, but there is a little element of that that brings this podcast to life.
27:17
Yeah. And I think I'm a huge believer in being imperfect. I don't know if that's the right way to say this. I don't know if people understand this, but our podcast, this is episode 177, Kathy, and we do almost zero edits. Only about three times that I can think of, in all the weekly episodes Mike and I have ever done, have we actually stopped to pause and take a break, and that was usually due to coughing fits. We don't do a segment and go, oh, we didn't do that well enough, let's go back and ask those two questions again. Everything we do is first take, and then we turn it over to Claire and Kathy and they create the product in a matter of 24 hours. It goes from us recording it to live in actually less than 24 hours.
27:25
24, yeah.
28:14
So it's completely authentic. And I've told the team specifically, don't take out the imperfections. That is actually what makes it human. And so I like the fact that sometimes I actually don't know the answer, and I'll tell you point blank, I don't know the answer. Now, it puts a lot of pressure on you to not say something stupid. But even if I make a mistake, or have to go back and correct something, which I can't actually think of examples of, we're very, very thorough in checking all of our sources. If we're going to cite something, we're going to double-check the data points. We approach it as journalists would, basically. And I think that's what makes the weekly kind of unique. It is this very imperfect, human approach to everything, augmented by our ability to use AI to make it as streamlined as possible from a planning, production, and promotion standpoint. But the humans show up and do it every week. Yeah.
28:14
There actually have been times when you've said like a sentence and I was like, ooh, that could be taken out of context, but we didn't do anything with it.
29:09
Yeah. Yeah. So, yeah, I don't know.
29:15
Okay, number nine, what books do you recommend reading to learn more about generative AI?
29:18
Yeah, so. I think I mentioned this yesterday too. In the AI Academy courses, I actually have a course where I talk about all my favorite books when it comes to this stuff. There are some that actually predate generative AI. Prediction Machines is amazing. The Algorithmic Leader by Mike Walsh is amazing. Those were written in the 2017 to 2019 range. Ethan Mollick's Co-Intelligence is more modern-day. Genius Makers by Cade Metz came out before generative AI, but it helps you understand how we got to the generative AI phase. We had those questions about environmental issues and things like that; if you want to understand the geopolitics and the environment, Empire of AI by Karen Hao was incredible. The one I think I mentioned yesterday is Geoff Woods, who actually did an AI-Driven Leader keynote for us. That's the name of the book, The AI-Driven Leader, and it's really good. So I read that one actually just.
29:25
Because I felt like he's coming to the event. I just want to be prepared. That's really good.
30:23
It is really good. Yeah. And then our book, Marketing Artificial Intelligence, came out in 2022, but we foreshadowed all of this happening. There's actually a section that asks, what happens when AI can write like humans? We knew generative AI was emerging that was going to be able to do the things we then got with ChatGPT later that year. So that's still a very relevant book as well.
30:27
Nice. Number 10. My organization is focused on what not to do with AI, but I think we should also communicate what to do. How do you think about that balance and how should leaders frame it?
30:52
Yeah, if they're not thinking about what to do, they're going to be obsoleted. So it's not even a nice-to-have; I think of it as a business imperative. Depending on what industry you're in, if you're not doing this stuff and other people are... if it's a SaaS company, you're cooked. It's game over. SaaS companies have to have been doing this for the last two years. If you're in manufacturing, or maybe pockets of healthcare or law or professional services, there's a chance you've gotten by to this point without having figured this stuff out. But it won't be long now until everybody else starts to figure it out. So you either have the opportunity to get out ahead of this and be that AI-emergent company that can actually accelerate growth, or you're just going to eventually fade into obsolescence. And then even from there, the adoption has a couple of layers. My workshop at MAICON this year was about AI innovation, and the main takeaway was: optimization is 10% thinking; innovation is 10x thinking. What I meant by that is it's going to be table stakes to use AI to optimize efficiency and productivity. Everybody is going to be doing that, and 10% is honestly probably a low bar for what you can gain in efficiency or productivity. But let's just say you're thinking, let's incrementally improve the things we're already doing, let's use AI to level those things up. That'll work for a little while. But if you have competitors, people in your industry, who are looking at this and saying, yeah, but how do we reimagine the entire thing? How do we change the pricing model completely? How do we bring new services to market, go into new markets? Those are the people with the 10x thinking. They're looking at dramatic transformation. And if you're up against a company that's doing that and has the will and vision to execute it, you're done.
And so that's how I think about our business. We're basically a media company first and foremost, probably as the foundation: build an audience and then create value for those people. So we do things like the podcast and the newsletter. We are an event business; we have MAICON and our virtual events, but we actually run probably dozens of events throughout the year, depending on which things you throw in that category. And then we're an education business, and the education largely comes through our Academy. Research would probably get thrown into the content side as well. I spend every day thinking about what a smarter version of all of those business models looks like. I literally think about how we disrupt the entire industry, all of those. What is just a different and better version? Not because I have anything against any of the companies that exist within those industries. There are a bunch of good companies; I have friends running companies in those industries. What motivates me is to say, how do I do something different and better? Not because I really care about the competition or want to beat any particular company. It's just that otherwise, I don't want to get out of bed. If I'm not trying to completely reimagine something, I just lose motivation. So that's how I think about it: you've got to get to the optimization phase if you haven't yet as a company, but you have to quickly also be thinking about and transitioning into the innovation phase, which truly drives transformation.
31:03
Yep. Yeah. So it's more about, like, guiding innovation, less about policing the risk of everything.
34:10
Yeah.
34:15
Okay. Number 11, as a director of learning and development: who is doing AI and L&D right?
34:17
Yeah. So this one comes up. I often cite Moderna; I think I mentioned that on the intro call. They did a really good job, and it's a great case study we featured in our courses. OpenAI had a case study on Moderna pretty early on; maybe we'll drop that link in there. We've seen organizations like Cleveland Clinic as an example. That's someone we've worked with, so I can speak directly on theirs: they looked at it from a leadership perspective, they looked at it from a practitioner perspective, and they really started thinking about how to drive that internally. HubSpot does a great job with this. Baptist Health is a company we've mentioned from a healthcare perspective that does some really cool things. There are actually some logos on our Academy.SmarterX.AI site. Most of the big companies we work with, we can't disclose who they are. But I will say, categorically, the way the best companies are doing it is they are infusing it into their existing programs, especially at the larger enterprise level. They're looking at it as complementary to any programs they currently have, and then they're building specific AI curriculum. Sometimes we will work with them where they basically plug our AI Academy into that to immediately level up, and then they'll complement it with other learning platforms. LinkedIn Learning and Coursera and Google and OpenAI all have great courses. So that's how we teach it: we build learning journeys through our Academy specifically designed for people to drive personal and business transformation, but we actually believe deeply in the value of a lot of the rest of the ecosystem and the content being created. So, yeah, I don't know.
I don't know if I gave great examples of case studies to go look at, but the companies that are doing it, I've generally found, aren't talking publicly about what they're doing.
34:26
Right.
36:25
Because it is a pretty distinct competitive advantage at the moment. Michelle Gansley, a former chief officer at McDonald's, did a talk on how McDonald's was doing this, so that might be another one to go look at; that's public knowledge. But yeah, it's early, and there are very few who have done it really, really well and are willing to talk about the fact that they've done it really well.
36:25
All right, number 12. I love this question, and I didn't even see this yesterday when we were going through them. Is there an AI concept for retirees that could help manage issues like healthcare decisions or transfer of wealth? That's 40 million people who could benefit.
36:47
I mean, you could come up with a really cool system prompt to do this with a GPT, but is there a company that has done this yet? I'm not aware of one. I'd have to deep dive on it. CB Insights is a platform we use to do market analysis; I would have to go in and run a healthcare industry search and see if we could find something like this. I would imagine organizations like AARP might be doing this kind of research already or building this kind of tech. But I love this kind of thinking, because this is basically what we always teach: once you understand AI, what it's capable of, and what it will be capable of in the coming months and years, you start to look at every problem differently. And this is an example of that. I think I talked on the podcast a while back about how insanely complex it is to get some types of medication. I know a family member who has a medication where the supply is very low, so you literally have to call around to five different pharmacies to try and get this medication, which they won't give you three-month prescriptions of. So every 20 days you have to bounce around between five facilities. And if you get one that says, we can get it in next week, it's like, okay, put in the order. Then you call another one and they say, we can't fill that because you have an order at another pharmacy. I'm like, how is this, in 2025, the way we do pharmacy? And I consider us privileged in our ability to solve this; we have the resources to where the finances aren't even the issue. So we're already at an advantage, and I still can't do this. And I think, what do people do who don't have the privileges that we have? That's something I've thought a lot about: how do I solve that? Or, Mark Cuban's worried about this one, so maybe I'll just let Mark Cuban do it.
But these are the exact kinds of places where ideas are born, when you look at problems differently. So I really like this question. I hope whoever asked it thinks more about this, and maybe tries to find some people who can work on this thing.
37:03
Is this something that you could throw into, like, a problems GPT? Or could you just start asking one of the tools some questions on how to get started?
39:13
Yeah, this is kind of like when I had the issue of not thinking parents understood AI enough, and the dangers of it, so I built KidSafe GPT, which basically was for parents to better understand risks and talk to their kids about those risks. I would imagine you could probably build a GPT in an afternoon that would do something similar, where you just went in and gave it the prompt: I'm trying to help seniors who maybe don't fully understand the best ways to make healthcare decisions. You can't function as a doctor, but you can provide medical guidance they can ask their doctor about. You just write the system prompt and then you build it. Transfer of wealth, for sure. I'm planning my trust. Yeah, if I had three hours, we could design a GPT that would probably be a minimum viable product of this in an afternoon.
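To make the "write the system prompt and then build it" step concrete, here is a minimal sketch of what such a retiree-assistance assistant could look like in code. Everything in it is an illustrative assumption, not a shipped product: the prompt wording, the `build_messages` helper, and the guardrails are all hypothetical, and a real build would drop the same system prompt into a custom GPT or an API call.

```python
# A hypothetical system prompt for the retiree-assistance GPT described
# above. The rules below are assumptions about sensible guardrails, not
# legal or medical advice requirements.
RETIREE_ASSISTANT_PROMPT = """\
You are a patient assistant for retirees navigating healthcare decisions
and wealth-transfer planning.

Rules:
- You are not a doctor, lawyer, or financial advisor. Frame every answer
  as background information and as questions to bring to a licensed
  professional.
- Use plain language; define jargon when it is unavoidable.
- When a question involves medication, insurance, trusts, or taxes,
  remind the user to confirm the details with their doctor or advisor.
"""


def build_messages(user_question: str) -> list[dict]:
    """Assemble the chat messages a GPT-style API call would take."""
    return [
        {"role": "system", "content": RETIREE_ASSISTANT_PROMPT},
        {"role": "user", "content": user_question},
    ]


if __name__ == "__main__":
    # With an API key, these messages could be passed to a chat model;
    # here we just show the structure the prompt produces.
    msgs = build_messages("How do I start setting up a living trust?")
    print(msgs[0]["role"], "->", msgs[1]["content"])
```

The afternoon-MVP point is that the product work lives almost entirely in that prompt string; the surrounding code, or a no-code GPT builder, is just plumbing around it.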
39:21
Well, I'm going to call my friend Michael at AARP and tell him that.
40:10
Yeah, if anybody wants to... I was going to say call me, but... So I read this tweet this morning. Sorry, this is a total sidetrack now, but people might find this interesting. There was a tweet from a guy who just left xAI, so I guess that's the connection here: a top researcher at xAI who worked for Elon Musk. He talks about how a common mistake companies make is not allowing engineers enough freedom and time. Then he followed up with another tweet where he said the best work comes from focusing on one problem at a time and nothing else. And so I tweeted that and said, I feel this in my soul. Constant struggle to live it. That is the curse of looking at every business and every problem and saying, oh yeah, I could solve that, it would just take me three hours. You spend your life constantly looking around and seeing all these problems, instead of saying, okay, but for the next three months, I am explicitly solving this one problem. Throughout my career of 25 years, I have always struggled with this. I have too many ideas, and I often can't just lock in and do one thing. I actually heard an interview with Jeff Bezos where he was talking about this recently, where some advisor told him, you have enough ideas to kill this company. You just always bring a new thing, and then the team can't focus. So I was going to say, reach out and I'll help you.
40:13
Call me in January.
41:37
Yeah, reach out. But if I say no, don't be offended, because I'm trying very, very hard to focus on a few key problems at a time.
41:37
We appreciate that. Okay, last question, number 13. It's estimated Spotify has 100 million songs and 75 million are AI-generated. I cannot confirm or deny that math; this was someone else's question. Should Spotify or other streaming platforms flag this content as AI?
41:46
Okay, yeah. So let's just categorically say there is a lot of AI-generated music emerging. It's rumored OpenAI is working on a competitor to Suno and Udio, where they're going to build their own music model. Each platform has to have its own policies about whether or not they allow AI-generated music, or whether they designate if it's AI-generated. My personal opinion on stuff like this, whether it's an AI-generated video or an image that could be misconstrued as fully humanly authentic, is that you should be able to know. I've seen that on some of the social platforms, where they'll indicate if it is. The bigger question here is around the evolution of what is considered music and what is considered entertainment. These platforms are going to give people what they want and are willing to pay for. I think it was Suno that recently said they had a hundred million, or 150 million, in annual revenue, and somebody was like, for what? Who is paying for this stuff, these AI-generated songs? And sometimes I just feel like, and I've said this before on the podcast, I don't know that I have the best taste when it comes to what's going to work for the broader consumer market here. Sora 2, to me, is a ridiculous product, and I don't know why people would spend a ton of time on that platform. But that is the one unique perspective of a middle-aged male. That's just me as a dad, and I look at this stuff and I don't get it. We waste our time on enough things; I don't need an AI-generated slot machine to take more of my time. That doesn't mean a whole bunch of other people can't find it fascinating, a nice distraction that they enjoy.
42:07
Yeah.
43:54
And so again, I try to be as objective as possible when it comes to these things, and I accept that there may be a whole bunch of people who love this idea of on-demand music, where they make it sound like whatever they want, and to them that's creativity. I'm not going to judge that; I try to just observe what's going on. So my personal opinion about AI-generated music, whether it's good or bad, is kind of irrelevant. Just look at the numbers and ask, is there demand for it? If there is, then somebody's going to build it and they're going to keep serving it up. In the end, the data will tell the story, and how people react to this stuff will drive whether the platforms designate AI content or not. If they decide fewer people stick around and listen to it when it's labeled, then they might not show the AI-generated label.
43:54
Well, that's what I was thinking. It's one thing to say, I'm going to tinker with this tool and use it to make something. It's altogether different to say, I actually am enjoying listening to it as a consumer.
44:41
Right. Yeah. And again, there are so many layers to this. I will say I've personally been fascinated recently by these clips I've seen where they're turning hip-hop songs into '50s and '60s jazz and blues music. It's infinitely interesting to me. And I know it's all generated, obviously, but it's such a wild way to hear the song in a totally different form. An Eminem song as a '50s blues track is just wild to hear. So again, I'm kind of talking out of both sides of my mouth here. I actually find some of this stuff super interesting.
44:53
Would you pay for it?
45:33
I highly doubt that, but I don't know. And I think this is kind of the exciting part: we just don't know where this stuff goes. There's so much to be learned and observed. I was shocked to see the 150 million revenue number; I kind of had the same reaction, for what? But yeah, it's interesting stuff. Long story short, I do think stuff like that should be indicated as AI for now, but that may evolve over time.
45:34
Sure. And also, if you are a Mastery member listening, our GenAI app series this Friday is actually on Suno. Claire did an amazing video that's dropping tomorrow.
46:06
And also on that track, whether or not you're an Academy member, we have a blog now that posts anytime we have new content available to Academy members. It's a great way to keep up on all the GenAI app reviews that are coming out, all the new courses and certificates and things like that. We'll put the link in the show notes, but I think it's just Academy.SmarterX.AI/blog, if I'm not mistaken. That was a lot of words, but we'll put the link in the show notes.
46:17
I'm going to ask you one more question before we sign off. Do you have any moments from MAICON 2025 that you've been thinking about over the past few weeks?
46:44
Yeah. So MAICON, if people aren't familiar, we started it in 2019 with 300 people. This year was 1,500. In the process, we almost lost everything. When COVID shut down the event business, we went to nothing as a business. And then me personally, I'd bet everything, my entire financial well-being, on AI working and being a thing people cared about, pre-ChatGPT. So for me, so much of it is just gratitude: being there and seeing all the hard work the entire team put in all these years, and then being with the people we don't get to see all year round, who we maybe hear from on LinkedIn, or get some emails from, or see in the Slack channel. I don't spend much time in Slack, but we don't get to hear their stories enough. So for me, it's just those three or four days of being together with all these people. It feels like this massive extended family, in a way, because everyone's so cool and supportive of each other and empathetic about where people are at. And just to be there and hear these insane stories. I mean, we had people from, what, 19 countries this year?
46:55
Me too.
48:01
Yeah. And to hear these stories of people who listen to the podcast every week in New Zealand or Japan or wherever. Or people who took this leap and left their safe career at a corporation because it wasn't AI-forward enough, and they went and did something else, and they were terrified, but it worked, and now they're in this amazing place. That's the thing I love: just being together with all those people. I don't know. What about you? I mean, you came in 2019.
48:01
I came in 2019 with Joe Pulizzi as a paid attendee, and I remember sitting through Keith's session and Katie Robbert's session that year, and I left thinking, that's cool. And I went back to work, because I just didn't know how to implement a lot of the things they were saying. How do I find the time? How do I find the resources? And then this year, I'm like, oh my gosh, there are so many things I can go do now. There are so many more stories, and the stories are getting very broad. There are very, very basic ones, and there are some folks, Lisa Adams comes to mind, who are so far ahead. So many of our attendees are so far ahead. Seeing all those changes is pretty remarkable.
48:26
Yeah. And I mean, even finding speakers in 2019 was an insanely difficult process, and I owned that process largely until probably 2023; I focus on the main stage now. Back in 2019, and when we came back in person in 2022, there weren't that many people actually doing interesting things, and I think so many people would come to us and wait for the answers from us. Then, once ChatGPT emerged and everyone could actually get in and start using this, it just exploded. All these interesting people were doing really fascinating things and pushing the frontiers. So now it's hard to narrow down the field every year. We only get to have 50-some speakers, or whatever it is, but we have hundreds of submissions, plus people we track all year round doing interesting things, to the point where now we learn, I think, hopefully more than people learn from us. We still do our best to stay on the frontiers and teach everything we can, but there are so many speakers doing things where you go, oh my God, I never even thought to do that. So, yeah, I love the quality of the speakers and the quality of the sessions, because you really can walk into any room and learn something new.
49:03
Yeah. And on that note, if you are interested, MAICON 2025 on demand is available right now. You can sign up and get immediate access to those 20 sessions. And MAICON 2026 is open for registration. So if you are listening to this on Thursday, tomorrow, October 31st, is the last day of our very, very early pricing.
50:18
So if you're interested, like $800, $900. Oh, my God.
50:38
Okay, so do it now.
50:42
That price goes up fast.
50:43
So do it now while you can. And I think that's it.
50:46
All right. Yeah. And that's just MAICON.AI, M-A-I-C-O-N dot AI. Both the registration and the on demand are right at the top of the page. Check those out. All right, Kathy, thanks. And Claire, thanks for curating questions for us, as always. And we will be back with episode 178, right, the weekly. I'll figure out when we're recording that, because I have talks next week. Where am I at? San Diego and Miami?
50:49
San Diego. And Orlando.
51:13
Orlando. I knew it was in Florida somewhere.
51:14
All right, next Intro to AI class is December 3rd. Next scaling AI class is November 14th. We would love to see you and your teammates there. And we'll see you next time.
51:18
Paul, thank you.
51:26
Thank you.
51:27
Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring, and keep asking great questions about AI.
51:29