#175: AI Answers - AI for 10X Innovation, Rethinking GTM, Dangers of Progress at All Costs, Autonomous Marketing, How to Keep Up with AI, and Future of Web Traffic
This episode of AI Answers was recorded live at MAICON, the Marketing AI Conference, featuring Paul Roetzer and Mike Kaput answering audience questions about AI's impact on business and marketing. The discussion covered topics ranging from AI-first marketing strategies and autonomous marketing capabilities to the challenges of keeping up with rapid AI development and implementing AI at scale in organizations.
- Companies that focus only on AI efficiency will see workforce reduction, while those pursuing AI-driven growth and innovation will create new opportunities and avoid job losses
- The shift from human-to-human to agent-to-agent commerce is coming, requiring B2B companies to rethink their go-to-market strategies and website optimization for AI consumption
- AI labs are moving too fast for society to adapt, creating challenges for parents, enterprises, and regulatory frameworks to keep up with technological capabilities
- True organizational AI scaling requires CEO-level commitment and company-wide mandates, not just departmental pilots or experiments
- The reasoning capabilities introduced by models like OpenAI's o1 represent a fundamental shift in AI capabilities that remains largely underutilized by business professionals
"So as organizations start to really realize what AI is capable of, the first thing they're going to do is look at how do we do things faster, better, cheaper, and then how do we do more of it faster, better, cheaper."
"Optimization is 10% thinking. Innovation is 10x thinking."
"If you work at a company that is flat or single-digit growth, you will have fewer people working there."
"I feel like the ground is moving beneath my feet every day. Like we just can't keep up with our own technologies and innovations within the labs themselves."
"When the CEO puts out the memo saying they're ready to be AI forward."
So as organizations start to really realize what AI is capable of, the first thing they're going to do is look at how do we do things faster, better, cheaper, and then how do we do more of it faster, better, cheaper.
0:00
Welcome to AI Answers, a special Q and A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them.
0:11
So we created the AI Answers series.
0:34
To address more of these questions and share real time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together.
0:36
Welcome to episode 175 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We are doing the seventh episode of our AI Answers series, which is presented by Google Cloud. This is a series usually based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events. But this is a special one, which is why Mike is here, not Kathy. We are live at MAICON, so you're going to be listening to this about a week post-MAICON. It's gonna come out the Thursday after MAICON, but we are right in the middle of it, which you might be able to tell from my voice. I don't know, in my head my voice sounds quite deep at the moment. So I have my ultimate podcast slash radio voice going, with one session to go. I still have the closing keynote with Dr. Brian Keating, but yeah, if you're here, it's been an incredible experience at MAICON. Mike's fresh off the stage this morning with Alex Kantrowitz, which was just an incredible interview. The Big Technology podcast is a must-listen if you don't have that on your list already. And then PJ Ace just brought the house down with his Rise of the AI Filmmaker session this morning that I just can't stop thinking about. I mean, if I didn't run the conference myself, if I'd have paid to be here, that talk alone would have been worth the price of admission. So, yeah, thanks to Google. Google Cloud was a great sponsor of MAICON as well, so we appreciate everything they've done for us, both with the podcast series and the AI Literacy project overall, and then, as a part of MAICON, the Boost Bytes, which we've been talking about a lot lately. So you can go to cloud.google.com to learn more about Google Cloud, but check out their AI Boost Bytes. We'll put that in the show notes. 
Just these really short training videos that are great; they help to level up AI skills and capabilities and get into some fun features of Google's technology. All right, so the format today is much like a lot of MAICON: I have no idea what we're doing. A lot of times we have this incredible team that just makes this event happen. It's world class, and I can't take any credit for that side of it. We just have this amazing event team that brings everything together, and it's so incredible. But the beauty for me is I don't need to be caught up in all the details. I focus on the big stuff, and I would throw this podcast episode into that. I was told we were doing it. I was told we had a podcast booth on the show floor, and then Mike and I were going to do a live recording of the podcast. And I am here, and I understand there are going to be some questions asked, and that's about the extent of my knowledge coming into this. And I'm in voice preservation mode, so I might talk a little quieter than normal. But, Mike, I'm going to turn it over to you. Give me a little more background on what we're doing here today and lead us into it.
1:01
Sounds good, Paul. So this is actually, I think, my first AI Answers subbing in for Kathy over here.
3:46
And also, as I'm sitting across from you here, this is the first time you and I are in the same room doing a podcast.
3:52
Yeah, it really is. We see each other in person all the time, but for the podcast, never. And I'm like, oh, this is how some other podcasts do it all the time. This feels like a new experience.
3:56
Yeah, a preview.
4:05
So, we're expanding our office soon, and we are building a podcast studio in the office in downtown Cleveland. So theoretically, going into 2026, Mike and I might be in the same room more often doing this.
4:06
Well, we'll test drive it right now. So, yeah, Paul, the idea here is our team has been soliciting questions from the audience, from people at MAICON, both online and in person, and our team has curated this great set of questions. I went through it and kind of further curated it. So we've got a handful of questions here about AI, the future of marketing, the future of business that we're going to run through. And we thankfully have permission from everybody to use their names, so I'll say the first name of the person that asked each question. A big shout-out to the audience: thank you all. Honestly, as I was going through these questions, I was like, yeah, these are really solid.
4:19
So quick note, because the marketing team will give me a hard time if I don't mention this. If you have no idea what we're talking about, you're new to the podcast, just hearing about MAICON for the first time and you've got FOMO that you weren't here: MAICON.AI. And there is on-demand for the main stage sessions. So by the time you're listening to this, I'm guessing the on-demand will probably be ready to go if you want to go check that out.
4:55
Excellent. So let's dive in. Our first question is from Omer, and he says: people talk a lot about being AI-first in marketing, yet all the focus seems to be on improving internal processes, being more efficient, and creating content. I think AI can and should be used to create new experiences for customers. Would love to get your thoughts on this kind of balance between efficiency versus actually thinking about the future.
5:17
Yeah, so this was actually the basis for my workshop on day one and then my opening keynote on day two. And it was that the table stakes for everybody is going to be efficiency and productivity. So as organizations start to really realize what AI is capable of, the first thing they're going to do is look at how do we do things faster, better, cheaper, and then how do we do more of it faster, better, cheaper. And my argument is that the companies that are really going to differentiate themselves are going to look more at growth and innovation: how do we do new things? And it's sort of like the slide you create that you think, oh, this might be the one people take pictures of. As a speaker, I had a slide that said optimization is 10% thinking, innovation is 10x thinking. And that's the mindset I wanted people to get into. It's okay to think about optimization. We should be doing that. Every company should be focused on how do we optimize, better, faster, cheaper, what we're already doing. But the bigger opportunity, and the way we avoid workforce reduction, is we drive growth. We build the companies faster into new markets, new products, reimagined business models. And so I think this is a great question, because it means he's in the right mindset, starting to really think about what's going to be possible, not just how do we make something a little bit better, a little bit faster, a little bit cheaper.
5:42
Yeah, that jumped out to me too. That's the way to square the circle, right? The companies that just prioritize efficiency, that's where you're going to see job loss, whereas we have to challenge ourselves to be more innovative and actually grow, right?
6:54
Yeah. If you work at a company that is flat or single-digit growth, you will have fewer people working there. And I know it's still somewhat debatable in some circles; I don't see how it's a debatable thing. Companies with flat growth will have fewer humans there. You won't need as many. It's not that the AI can do everyone's jobs, it's just that it can make the people who are there so much more efficient. You just need fewer people doing the same thing. And so if you're not creating more demand that grows the company, you just don't need as many people doing the work. So growth and innovation, to me, is the only way through this in a positive way for the economy, and for you individually, your careers and your companies.
7:05
All right, our next question is from Yaren. And I apologize in advance if I'm pronouncing his name wrong.
7:49
We don't get the phonetics.
7:53
I don't get the phonetics. Yeah. So Yaren says as websites are increasingly serving dual audiences, both human buyers and the AI models that are interpreting and researching these sites, how should B2B companies specifically be rethinking how they go to market and their general playbooks?
7:54
We had some really good sessions on this. So Jeremiah Owyang had a really good main stage session where he was talking about this shift to agent-to-agent communications, agent-to-agent commerce, where, whether you're a B2B marketer or a B2C marketer, it doesn't matter, your website might not be a place people actually go to. And even if there's still traffic coming to your website, there's a decent chance that in 6 to 12 months it's actually someone's AI agent that's coming and not the human. So there's going to be a complete shift in that stuff. And then we had other sessions related to, what are people calling it, AEO or whatever.
8:13
Oh, right, yeah. AI search optimization. It's either AEO or GEO; I've seen both, because of, like, generative. So I don't know what the term will end up being instead of SEO.
8:46
And again, if you're not a marketer, you're like, what the hell are you guys talking about with any of these acronyms? So search engine optimization, or SEO, has been the thing since Google was created, basically since 2000. In essence: how do we get our websites found? How do we show up in search results, be one of the top, you know, blue links? And now we're into this moment where people are trying to figure out, well, how do we get ChatGPT to surface us in conversations, or Perplexity, or Gemini? And so there's a lot of emerging practice around this, of what the best practices are. And I feel like it's a moving target, because we're all just kind of guessing. We're just starting to find tools that are monitoring how people are showing up in these things, and then you're just kind of trying to figure it out, because everybody's working differently. Like, everybody's got a different search index. So Google at least is a known entity here; we know they're really good at search. But everybody's trying to play this game, whether it's Perplexity or OpenAI. They're all trying to build their own browsers and build their own search indexes. So that is definitely an open area of research and of exploration and experimentation. Our strategy, as I've said a number of times on the podcast, is just be where our audiences are. Like, when I talk to Mike and Kathy from a marketing perspective, I don't really ever ask them about organic traffic. I just kind of assume it's going to go to zero, that we're just not going to get the people to the site the same way we used to. And I know that it takes diversity of content, of original thinking, not just, like, let's go have Claude write a bunch of articles for us. Mike and I do the podcast once or twice a week. It's original thought. It turns into a transcript, which turns into videos on YouTube, which turns into podcast distribution. 
And we're just kind of playing the long game and saying, if we do the right thing and we solve for the audience and we create value and we put it in all these places where they may find their information, good things happen. And I know that a lot of companies can't take that same approach, and you do have to live and die by the KPI. But some of this comes down to education, you know, giving yourself the grace to realize, we're trying to figure this out, and I need my executives to understand that. And if you need to, go play this clip for them. Clip this part of the podcast and say, I don't know. If there's any authority we have here, use it if you can.
8:55
I love that. All right, next question is from Jason, Jason Cabrera. Big shout-out to Jason, great guy, had some great conversations, very active community member, huge community member. So, Jason, thank you for all you do. Jason had a few questions specifically around aspects of the AI arms race. I kind of synthesized these for purposes of being concise, just because we can't ask all of them, but I'm going to try to get at the idea of what he was getting at, which is basically: what are the consequences, in your mind, of the progress-at-all-costs mindset that the AI labs have? They have this inescapable pressure to ship new technology regardless of consequences, which we've talked about before. What lessons should we be taking from this?
11:10
The challenge right now with, you know, OpenAI and Meta and xAI and Anthropic and Google, and I think I'm missing one of the labs there; Microsoft I wouldn't throw in this bucket quite yet, they're just funding it for everybody else to do it. But their motivations are the pursuit of trillion-dollar markets. They are looking at this as a generational opportunity to create trillions of dollars in market cap value and be at the front of reimagining basically every industry. And so they have to move incredibly fast from a research perspective. They're obviously moving probably faster than they should from a productization perspective, bringing things to market that society isn't quite ready for yet. Sora 2 would be an example of that. They're competing directly with Google's Veo technology. They made an announcement yesterday. Again, yesterday in our world would be, what day is it, October 15th. So Sam had tweeted something about a new Sora 2 capability, and then sure enough, Google DeepMind had released Veo 3.1. And so it is literally just this: okay, let's hold this until Google does something and then we'll drop this thing. And it's this constant battle for headlines and virality, social media and market share and mind share. And there are people in those companies who, I think, think deeply about the impact on society, but their voice is nowhere near as powerful as it was, you know, three years ago.
11:58
Yeah.
13:32
So an upcoming talk this afternoon is Irene Solaiman, who was actually in public policy at OpenAI when they released GPT-2, and she's now the chief policy officer at Hugging Face. And these are the kinds of things someone like Irene thinks about and works on. You have trust and safety teams that are basically looking at the release of technologies and saying, I don't think we're ready, and the product team says, yeah, we're going to go anyway. So the danger is that we just put technologies into society. Like erotica. We're going to talk a little bit about this, we'll have talked about it by the time you listen to this, on the weekly episode, episode 173: Sam Altman tweeted, like, hey, we're gonna push out some updated stuff to ChatGPT, and it's gonna allow erotica for non-teenagers.
13:33
I was gonna mention this. Yeah, that's a perfect example of what you're talking about.
14:18
Yeah. And they're just gonna do it, and they're gonna kind of push the limits of society, and those are the ramifications. But then it becomes very real if I drill into it as a parent. Okay, so OpenAI has somehow magically found a way to use an algorithm to determine that my daughter is not 18, therefore she doesn't get the ungated version of ChatGPT. But it doesn't mean that they don't find a way to get to that version, or that they don't have a friend who figured it out, like, hey, look at what I'm able to do in ChatGPT. And so now, as a parent, I have to deal with the reality of, okay, so my 13-year-old, while she's not someone who would go seek out these kinds of darker sides of AI...
14:22
Yeah.
14:59
She's certainly smart enough to figure it out, to know that that kind of stuff is there. And now I've gotta have a conversation with my daughter, like, hey, when you're in ChatGPT, here's the deal. And it's like most parents, 99.9% of parents, we have no idea how to have that conversation with our kids. And three years from now they're gonna be like, you were doing what with ChatGPT?
14:59
Right.
15:19
So that's the reality: we move too fast and society doesn't catch up. Enterprises don't catch up. It's just too quick for everybody. And we had a closing keynote on day two with Xiaoma from Google DeepMind and Angela Pham from Meta, and I think it was Xiao who actually said, when Kath Anderson, who was moderating it, asked her what it feels like in an AI lab right now, she's like, I feel like the ground is moving beneath my feet every day. Like, we just can't keep up with our own technologies and innovations within the labs themselves.
15:20
You know, I'm not going to put a question in Jason's mouth here, but I do have a quick follow-up question of my own. Do you see this dynamic changing at all? It's market pressure. It seems almost inevitable, I hate to say it.
15:55
Yeah, it's getting faster. You heard it talking to Alex Kantrowitz this morning. Angela and Xiao both said it themselves: it's not slowing down, it's accelerating, well beyond, again, what the labs themselves are capable of processing and understanding. And that's why, going back to Jason's original question, the labs can't keep up with their own innovations. And then someone else takes an innovation and productizes it, and sometimes the researchers who worked on it don't even have any say in it, or any influence on where it goes.
16:07
Yeah. All right, a question from Abby here: how far away are we from, quote unquote, autonomous marketing?
16:40
That's an interesting one. I think it depends on the complexity of the workflow and how long of a horizon it is, how many steps need to be completed. I think we're at the very early stages, where you could string together some semi-autonomous agents. Not like, hey, single agent, go do this thing for me. Human-in-the-loop is still essential in the vast majority of instances. So, I don't know, let's try to make this really tangible: the production of this podcast. So Mike and I are going to record this thing, and then it's going to get turned over to Claire and Kathy and the team to do what they do to turn this thing around and produce it.
16:49
Right.
17:27
So, you know, in that process, I don't know, let's just say there's 15 to 20 steps: taking this, looking at the transcript, cleaning up the transcript, enhancing the audio, noise reduction, speaker identification. There's all these tasks that make up the production of each episode. And again, I'm just going to ballpark here and say 30 to 50% of those tasks are probably heavily AI-assisted right now, for sure. Claire still oversees everything. I think Claire listens to the podcast herself at least once, if not twice, every episode. So there's all these human things that still go into it, but we save probably 20 to 30 hours a week using AI. Now, if I look at that process 12 months from now, do I think we will have removed Claire, or the human, from the loop? No way. I think we will continue to get incremental improvements, and we will probably get to the point where you could now do another episode a week with the same level of human involvement as we had before. And so I tend to think of marketing and workflows and campaigns, and I just feel like we're heading in a direction where, depending on what that is, you might be anywhere from 10 to 40 or 50% automated right now.
17:27
Yeah.
18:41
And in some, you might be able to get up to, say, 80 to 90%. Like, deep research as an example.
18:42
Yeah, yeah.
18:47
We go in, we run a research report. Let's say we're staffed properly, with a former journalist who can verify sources, check citations, do editing of the AI-generated stuff. Mike and I spend an hour doing the domain expertise review of the output, and boom, you publish a deep research report in five to seven hours that would have taken 50 to 70.
18:47
Right.
19:08
That's a 90% thing, if it works.
19:09
Yeah.
19:12
So, I don't know, I feel like this is why we talk a lot on the podcast about the need for custom evals. Don't worry about what the latest evaluation of the latest model says until you run it through your own. Like, here's the five things I do: how automated has it made me?
19:13
I also wonder, too, when I think about how far away we are from autonomous marketing: yeah, we could automate a lot of marketing, and then the question is, where is that time reallocated? Maybe we do more events; that's not automatable, right? So maybe it's never going to be fully autonomous?
19:27
Yeah. And again, I've mentioned on the show before, I've had a Tesla now for seven years. I've had whatever form of their current self-driving technology has existed for seven years. And I would say it has made a massive leap in the last six months. It is very noticeably better self-driving technology. And maybe they're 90% of the way there, and it's taken them probably 10 years to get that 90%. That last 10% might take another 10 years. I don't know, but oftentimes what happens in automation is you can get through that first 80 to 90%, but it's sort of that last mile that is the hardest part to crack, because you're dealing with all the anomalies, the irrationalities. All of that starts to come into play, and that's really hard to solve for.
19:44
This one is from Christian: which AI-powered tactic has surprised you by under-delivering? So maybe it's something in marketing you've tried, or in knowledge work in general. How would you think about that? I don't know.
20:31
I mean, this is categorically, I would just say, agents. Like, I just think...
20:44
It's a good idea.
20:48
Yeah, I think it's just so overhyped. You know, really starting in fall of 2024 is when the big companies started talking a lot about agents and marketing them as autonomous when they really weren't. And I think we're actually still pretty much in the middle of that phase of it. But there's just a lot of hype. And don't get me wrong, agents are super practical, but they're just way more human-powered and deterministic at this point, where humans are setting the rules and setting up the automations and doing the data connections, and the tech companies aren't really that straight up about that. And so that's the part, I think: anything you try and apply agents to, you're probably going to be underwhelmed with what it actually does.
20:49
Yeah, and personally I'm super impressed with the automations alone that people have made. But that line of what's actually an agent versus just a really great automation, it's hard for me to parse through. All right, this one's from Kelly: what is the most compelling early AI use case that you've seen that has helped leadership finally get it, that is, you know, enough to greenlight investment or actually get buy-in for AI?
21:33
I don't know. I mean, again, I'll pick a really tangible one for me, which is the CoCEO GPT that I created in December of '24. I think that's when I did that webinar. And if you don't know what we're talking about, you can go on SmarterX and click on Tools and go to the CoCEO page, and the template prompt is right there; you can build your own. But I feel like things like that, the personalized GPT that assists in that specific person's job, that is often the way to do it. We had Jeff Woods do an opening keynote right after mine on day two, and he shared some incredible examples of talking with leaders. In one instance, he built an AI board for them, where he actually trained it on feedback from the actual board members. And then they started submitting the board decks to the AI board in advance of the actual board meeting and had it function as the critic that said: what are they going to ask us about? What will the challenges to the deck be? And so for the leader of that company, who wasn't AI-native in any way, it was like, this is a super valuable use case. If I get nothing else from AI, I get it now. And I think that's what it takes: these very personalized examples where the value is so obvious and so tangible that it's the best way to get them to say, okay, let's go find five more of these things.
22:01
Yeah. And this also gets into what we talk about all the time, basic change management: find the thing that executive or board member is always harping on about or cares about, and show them how AI can do that better for them. Okay, this one's from Tim: what's your best guess about the impact of this ability to purchase or check out right within ChatGPT? What's the impact of that on direct-to-consumer brands? This is referencing ChatGPT's recently released Instant Checkout feature, which we just talked about. If you're a direct-to-consumer brand, now suddenly ChatGPT might be mediating between you and the customer. What do you think is going to happen there?
23:21
Well, I think that's going to be a shift, because they have 800 million weekly active users.
24:04
Right.
24:09
So if it was just some startup with 50 million in funding that found some way to do this, it's certainly an interesting technological change. But then you mix in the fact that they have a partnership with Shopify, right?
24:10
Yeah. Shopify is coming online now.
24:24
Yeah. So you're already going to do it. But anybody who has distribution... so distribution means different things in different industries, but distribution meaning they have an audience that already uses a platform. So if you think about Google, they have seven platforms that have over a billion users. Gmail is an example.
24:26
Yeah.
24:43
So when you have platforms where there are existing users and you make an innovation, you have immediate adoption that can shift markets. And so that's what I would say with this: OpenAI is a major player with massive distribution and the ability to build partnerships very quickly. And a lot of brands are going to want to be on board experimenting with it, not being left behind. So I think it could very quickly change the dynamics of how people shop and behave.
24:44
This next question is from Helen. She said: last year at MAICON, you said this was the least capable AI you will ever use moving forward, and that AI will continue to change and improve. What is the one change that has shocked you the most, and what is the impact it's going to have on us?
25:11
Well, so MAICON 2024, the final day. Around this time on the final day, about three hours before the closing keynote, the o1 model from OpenAI came out, which was the reasoning model. And that was the first time in human history that we had an AI that could take time to think like that. All of us as users could actually go in and use a model that, when you asked it something, didn't basically just do information retrieval; it went through a chain of thought like a human would. And I still feel like that is completely overlooked in society and in business, that that technology exists. I can tell you from my personal experience, standing in front of thousands of people this year and asking the question: has anyone tried a deep research project in Gemini or ChatGPT? And you see like two to five hands go up in a room of 300 people every time. So I know that business leaders and practitioners aren't even using the technology that is sitting in front of us.
25:32
Yeah.
26:36
So that, to me, is the most overlooked thing. It's the thing that Mike and I looked at right away and said, well, this changes how research firms work. It changes how consulting is done. It changes everything. And yet nobody talks about it. It's wild to me, but I get it. It's kind of like a hidden capability, and if you don't follow this stuff closely, you just don't think anything of it. And that's why they're working so hard now to integrate it right into the model and let it select when to go do the reasoning. So you'll get more usage of it, but people won't be aware they're doing it. They won't know what "thinking" means when they see it thinking, and stuff like that.
26:36
We talked about that a little bit when GPT-5 was released, right? Because suddenly, with the auto model router, all these people that didn't even know thinking was an option are suddenly being routed to better reasoning and more complex chains of thought. And that's probably a bit eye-opening for some, for sure. All right, this next question is from Robin. There's a little bit of a setup here, so it's a bit extensive, but it's all really important stuff and really good context. Robin says: my company is considering building our own AI for internal use, like wrapping around an open source model, because it's quote unquote more secure and safe than using the currently available licenses from frontier model companies. I worry this isn't a good use of our resources. The security and privacy policies of the frontier model companies, at least I assume with an enterprise license, appear strong. I fear the leadership team may be being sold on this when it's actually not a high-risk issue. Am I wrong?
27:13
So, you know, I'm real straight up with people when we're outside of our area of expertise. If I was being asked to do this in a consulting environment, I would actually say let's go bring in some people who are more specialized in this area. The CIO's office, those are the first people I'd call when it comes to this kind of technical integration. I will say what Mike and I have advised some enterprises to do, because oftentimes we're talking with teams or departments within the larger enterprise: let the CIO's office do what they're going to do. Let the IT team explore these domains; they should be. So if you're asking this from the lens of being in IT, or IT is sort of driving AI adoption in your organization, let them do what they do and explore these bigger builds on open source models or whatever it may be. You can't as an enterprise, though, tell your marketing team, just wait six months, we're working on this big thing, we gotta go figure this out. And so what we often advise is sort of a parallel path. Go ahead and pursue the build and customization of the model so that the C-suite and the board is more comfortable putting highly sensitive data in and letting it do these more complex things with trade secrets and such. But don't make the marketing team wait, who just needs it for podcasting and social media shares and content creation, where no personal data is going to get into it and there's no confidential stuff leaking into it.
28:12
Yeah.
29:40
So yeah, I think this is again where the education comes in, but you have to work closely with legal and IT and procurement, and bigger companies should be pursuing both paths, I think, is the high-level answer. What I would say here, and I've.
29:40
Had a few conversations during Macon related to this, where it's just like, awareness is also really helpful. I'm not coming down on one side of the debate or another, but a lot of people have asked me questions where they weren't aware that OpenAI, for instance, has enterprise licenses. Now, you again have to do your own due diligence, but the companies, the frontier labs, are offering these higher, more secure, compliant tiers now as well.
29:54
Yeah, for sure. They know it's a barrier and they're obviously solving for it.
30:19
All right, this one's from Marion. How are you seeing AI or how do you think AI should be used in the nonprofit sector?
30:23
There's actually quite a number of nonprofit leaders here at Macon that I've talked to. I don't really know that it's much different than anybody else; a nonprofit just generally has fewer staff. But then again, I talk to plenty of massive enterprises with tens of thousands of employees, and every team is short-staffed right now. Like, yeah, they work in a big company, but it's like, we're five people short doing what we're doing and we don't have any headcount we're allowed to add. So I guess I would broaden this to: when you're in a constrained environment and you have limited resources, the beauty of it is you now maybe have the ability to function across departments. So in a nonprofit environment where you have one or two generalists who maybe do a little marketing, maybe do a little fundraising, maybe are involved in operations, they're kind of wearing a lot of hats. Imagine being able to build a GPT for some areas that you're not an expert in. And then you have a generalist who just knows how to talk to ChatGPT and has certain custom GPTs built to help in areas where they're not strong. So I think from that standpoint: having generalists in nonprofits is very common, and they now have at their disposal experts in a lot of different fields they didn't previously have. Then you look at what are the problems and challenges and goals of the nonprofit and say, how can AI help us pursue these or solve these better? And what I often advise there is literally just ask. Pretend like you had a consultant. You're an executive director of a nonprofit and your biggest challenge is, you know, you're in the middle of a campaign where you're trying to raise a million dollars or whatever that is, and you just go and say: listen, we are resource constrained. We are a nonprofit. This is what we do. The goal I'm pursuing is this.
Here are the five reasons we are falling short right now. I'm looking for out-of-the-box ideas of how I can do this using AI technology, spending less than $10,000. Just give it the prompt like you were talking to a consultant in this space, see what it comes up with, and then talk to it about that and push on ideas.
30:30
You know, I keep also thinking about the idea of mission, because nonprofits have, in my mind, an almost unique advantage. You have a real mission documented on your website that's very emotional, sympathetic, and very clear about what you do. That is really good context for any AI use case. If you have something like a mission advisor, like, how do we stay on mission? Content creation, messaging, brand voice. I could see a lot of really useful use cases.
32:35
Yeah. And then also, our JobsGPT might be helpful. That, again, is in the SmarterX AI tools. You can go in and say, I'm the executive director of this nonprofit; help me identify use cases for my specific role. And it'll go through and break down what you do into tasks and give you an AI rationale of how AI can help you. So that's another way to think about it.
33:02
All right, Paul, we're gonna do a handful of other questions here over the next several minutes, but I do wanna be conscious of giving your voice a break, 'cause I know you've got some more speaking to do at Macon. It's lucky this is my last bit, so, you know, if I lose my voice. I almost did lose my voice on a past podcast. But this one comes from Brian. Brian Piper, awesome community member as well; I've seen him at Macon here. What is it going to take for educational institutions to actually start integrating AI into teaching, learning, and operations? Why is there no sense of urgency?
33:24
For most institutions? That's tricky. So we had Pat Yongpradit, who's the Chief Academic Officer at Code.org, on the stage yesterday, and he was talking about AI in education and some of the challenges they face there. You know, I've worked with and talked with quite a number of higher education leaders as well as high school leaders in the last couple years. I mean, the challenge they face is the same challenge they face when there's change of any kind. Universities aren't designed to be very nimble. There's tenure, where you have professors who don't want anything to do with this stuff and are not going to change how they teach in the classroom. There are state-funded schools who have to abide by the guidelines they're given or the testing they're required to do. There's just way more structure and guardrails to how it's done. And the way I've seen it done really well is, one, at a leadership level, from the provost to the deans, the commitment: hey, we're going to do everything within our power to move this forward. But oftentimes at the university level, it's coming from the individuals. The professors are saying, I'm not letting my kids be left behind; I'm going to figure this out. People like Brian who are in the education space, who are at these events, they're doing everything they can. They're active in AI communities online and they're trying to find other people like them. And so that's why I think it's just gonna be a battle in the education space. You're gonna have people fighting the good fight when they know that they can't be as innovative as maybe some other organizations outside of the education space. But yeah, it just needs people with the vision and the will to keep pushing and make whatever difference they can within the confines of education.
33:55
Yeah. I also just deeply sympathize, because not only is education struggling to figure this out, they're ground zero for these changes, with how much younger people are using this technology.
35:35
Yeah. And in the next year or two, you're going to have way more parents starting to question the value of education. Is it worth the money we're spending? And what kind of job is my kid going to have if they're going into journalism or accounting, or even law firms? Like, if I'm going into the legal industry, what does the associate job look like in five or six years when I get out of law school? We have no idea. And so as a parent, again, my kids are 12 and 13, I'm already starting to think about this. What high school are they going to go to? How prepared is that high school to prepare them for an AI future? And then certainly at the college level, four or five years out when my kids are going to college, I cannot fathom funding $50,000 to $100,000 a year for a school where I don't feel like they're going to be properly prepared for the reality of the world.
35:43
Well, and the good news is, like we've covered on past episodes, there are some schools, especially LSU and OU (I mean, we're biased toward Ohio schools), putting out some very innovative programs.
36:32
Yeah, I mean we have at least three people from Ohio University here. I've seen like friends of ours, professors, so.
36:41
All right, this one's from Cindy. How does the average person, the average knowledge worker, stay on top of everything happening in AI? I feel like picking one tool and getting really good at it might be one path, but then my tech FOMO kicks in and I start scrambling to learn the latest thing. What's the best approach?
36:47
I don't remember what episode it was, maybe we can find it and throw it in the show notes here, but there was one where I gave like a five-step process, and I'll just do number one in that list, which was: pick a platform and get really good at it. Like, I get the fear of missing out and all the tech. I'm looking around at our sponsor hall right now and there's like 39 exhibitors.
37:04
Yeah.
37:24
And awesome tech. Like, you could walk up to any one of these booths and probably find some value in the tech that they have. But as a starting point, just get really, really good at Gemini or Copilot or ChatGPT. They're coming out with new features every week. Like, guided learning is a thing I love in ChatGPT now that most people don't even know about. It's amazing for parents with their kids. So just stay up on that stuff; get really good at the image, the video, the reasoning, the agentic stuff. That's going to be enough for most people: just be a power user of a platform.
37:24
Awesome.
37:58
All right, Paul, I'm just going to do one more here so we can make sure that you're ready to go for the rest of Macon. But this one is: what signs tell you an organization is ready to move from isolated AI pilots to scaling up adoption?
37:59
When the CEO puts out the memo saying they're ready to be AI-forward. I don't know. We've certainly worked with some organizations where we've seen it scale up at a department level, and they've moved past just the pilot phase where they're experimenting in pockets, and it is truly like, how do we improve workflows? How do we build smarter campaigns? How do we solve problems more intelligently? How do we drive innovation? They're asking the right questions, but often at a department level. It's when the CEO says: we are doing this. This is going to be across every department, every team. It is a requirement of you as a professional, if you want to be here in 12 months, that you reskill and upskill. We're going to provide the AI training for you. We're going to provide the resources. You're going to get licenses, and you're going to be taught how to use them. And I just don't see that much. Even sitting here saying this, I'm trying in my brain to come up with a great example for people. Moderna is the one example I use in one of my AI courses for AI Academy, because that's what they did. It was just top-down, and there were weekly meetings and centers of excellence, and it was truly infused throughout the organization. I think that's what it takes for true scale to happen.
38:14
Paul, this has been an awesome, unique experience. I appreciate you answering all these questions on top of a busy Macon schedule. It's been an incredible event.
39:22
Yeah, thanks everyone for the questions. And again, we'll be a week removed from this when it comes out, but it was amazing. We appreciate everybody being at Macon this year. We will have announced the 2026 dates by then; I can probably say it since this will have come out after, even though we haven't announced it on stage yet. I think it's October 13th to the 15th, 2026, in Cleveland. So check MAICON.AI for next year, and On Demand for this year should be live by the time you listen to this.
39:31
Excellent. Thanks so much, Paul.
40:04
Thanks for listening to AI Answers. To keep learning, visit SmarterX AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
40:06