The Artificial Intelligence Show

#169: AI Answers - AI for Job Searching, Cutting Through the AI Noise, SEO vs. GEO/AEO, The Loss of Critical Thinking & How AI Is Reshaping Education

63 min
Sep 25, 2025
Summary

This AI Answers episode addresses 20 questions from business professionals about AI implementation, covering topics from job searching and SEO evolution to education impacts and responsible AI practices. The hosts discuss practical adoption strategies while emphasizing the importance of maintaining human-centered approaches to AI integration.

Insights
  • AI proficiency doesn't require technical background - success comes from daily experimentation with tools like ChatGPT and Gemini as conversational partners
  • The future of advertising and SEO is uncertain as voice interfaces may eliminate traditional ad placement opportunities, forcing marketers to rethink distribution strategies
  • Companies should focus on using AI efficiency gains to drive growth and innovation rather than replacing humans, transferring time savings into higher-value activities
  • Critical thinking skills are at risk if people become overly dependent on AI, but guided learning features can actually accelerate skill development when used properly
  • Authenticity and human-centered approaches will become key differentiators as AI-generated content becomes ubiquitous
Trends
  • Shift from SEO to GEO (Generative Engine Optimization) and AEO (AI Engine Optimization)
  • Rise of AI-powered job application automation, creating an arms race between applicants and HR systems
  • Growing adoption of reasoning models and deep research capabilities for complex analysis
  • Increasing use of AI avatars and voice assistants in customer service interactions
  • Movement toward guided learning AI that teaches rather than provides direct answers
  • Enterprise adoption of private AI models for sensitive industries
  • Integration of AI into educational systems despite institutional resistance
  • Voice-first AI interactions potentially eliminating screen-based advertising
  • AI agents becoming companions and relationship substitutes
  • Consolidation risk for smaller AI companies lacking sustainable business models
Quotes
"The idea though, is use AI to accelerate growth and innovation to the point where you don't need to replace people. You transfer the savings that AI provides to you, the lift it provides to you, into doing things that accelerate growth and innovation because there is more demand for your product or service."
Paul Roetzer
"The beauty is you don't need a technical background to get the benefits of AI anymore. So whether you're using ChatGPT or Google Gemini or, you know, some people use Anthropic Claude. Usually those tend to be the more technical users that are using Anthropic Claude. But Google Gemini, ChatGPT, you just need to be able to talk to it."
Paul Roetzer
"If we let it happen, it's going to be a major problem. So the way I think about this is we all, you know, get to the positions in our careers we are because we solve hard problems."
Paul Roetzer
"I think the people who use the technology to improve themselves in the long run always win."
Paul Roetzer
"I think too many companies get caught up in the security and privacy risk stuff that's pushed by IT and legal, and they take too long to move to the innovation side. And I think that's maybe the greatest risk is obsolescence from not moving fast enough."
Paul Roetzer
Full Transcript
2 Speakers
Speaker A

The idea though, is use AI to accelerate growth and innovation to the point where you don't need to replace people. You transfer the savings that AI provides to you, the lift it provides to you, into doing things that accelerate growth and innovation because there is more demand for your product or service. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmartRx and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 169 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, joined today by my co-host Kathy McPhillips, our chief marketing officer at SmartRx. Welcome back, Kathy.

0:00

Speaker B

Thank you so much.

1:20

Speaker A

We get to do like two of these a month together, plus all of our classes and everything else we do together. So if you are listening to the podcast and you're a regular weekly listener, this is a special series. We started doing this a few months back where we do these AI Answers, presented by Google Cloud. So we basically take our intro and scaling AI classes that we do each month. We do one free Intro to AI class each month and one free Five Steps to Scaling AI class each month. And then we take all the dozens of questions we get from those that we don't get to answer during the live events, and we answer them as part of this AI Answers series. So, I don't know, this is what, our third or fourth one of these?

1:21

Speaker B

It's, I think it's our fifth.

2:08

Speaker A

Oh my gosh. All right. Well, there are five of these now, so if you're new to them, you can go back and listen. We try and mix it up. Kathy will give a little bit more of a breakdown of how this all works, but we try and mix up the questions. So if you've listened to past ones, hopefully you hear new things each time, and sometimes we'll bring back kind of the greatest hits, the questions we get all the time; we'll try and kind of keep those fresh. So again, special thanks to our presenting partner, Google Cloud. We have a great partnership with the Google Cloud marketing team. In addition to sponsoring this AI Answers podcast series, they are also our partner for those monthly classes, the Intro to AI and Five Essential Steps to Scaling AI, a collection of AI blueprints, and our Marketing AI Industry Council. You can learn more about Google Cloud at cloud.google.com. And then I would also give a little plug for their new AI Boost Bytes series of short training videos. These are really cool. They're about 10 minutes each, and they're just meant to build up your skills and capabilities in AI very quickly. They often go through AI technology from Google and Gemini specifically, things like that. So a really cool series to check out. They just launched that in August 2025. We will put a link to the announcement post, and you can learn more about the Boost Bytes. There's, I think, dozens of them available now for people to check out. And then this episode is also brought to us by MAICON 2025. This is our flagship in-person event. It is happening very, very soon. I think we are officially like three weeks out, if I'm not mistaken, Kathy; I see the countdown clock when I go to the MAICON site. So October 14th to the 16th in Cleveland. This is our sixth annual Marketing AI Conference. We are expecting 1,500 plus, and we are continuing to trend in that direction, so it is looking like a great turnout this year. Dozens of sessions, some amazing speakers.
You can learn all about it at MAICON.AI, that is M-A-I-C-O-N dot AI. On episode 168 of the podcast, I went through a full breakdown of the main stage sessions. We have 10 general sessions where everyone's together. We announced nine of them, and I think we also have an email going out this week that Kathy is probably getting ready to send soon with all that information; or, by the time you listen to this, you may have gotten that email, correct? Use POD100 for $100 off your MAICON ticket. So again, it is MAICON.AI to learn more about that event, and we would love to see you in Cleveland October 14th to the 16th with me, Kathy, and the rest of the SmartRx and Marketing AI Institute team. All right, Kathy, turn it over to you.

2:09

Speaker B

So this week I held MAICON office hours.

4:45

Speaker A

Oh, how'd it go? I haven't talked to you about that.

4:47

Speaker B

I know, it was great. A lot of people just wanted to talk about, you know, is MAICON right for me? Or: I'm already registered, I don't know how to tackle the agenda, what should I do? So, you know, I had a lot of questions for them: what are you trying to do in the next three to six months? Tell me about your team, your industry, your role, obviously, all of these different things. And we walked through the agenda. I am so excited for this conference. I'm like, what can I actually sit through? Just thinking about Andy Crestodina talking about custom GPTs, Elise Horse talking about email platforms, Taylor Rady talking about AI toolkits. I'm like, oh my gosh, I can't wait to listen for my own job.

4:49

Speaker A

That's so cool. Yeah. And if people don't know what Kathy's referring to, she actually had what might look like a crazy idea: to just basically open her schedule and let anyone reach out to her and schedule 15-minute blocks, was it?

5:25

Speaker B

And it went super fast.

5:36

Speaker A

Yeah. Just to, like, talk about MAICON, ask questions, find out if it's right for them. So it was a really cool initiative. I was happy. I was happy it was you doing it, not me. But I love the idea. Again, it's the more human side. I always talk about this: more intelligent, more human. And part of our goal at SmartRx is to use AI to automate the things that are low human touch, you know, that are good to be automated, and then free ourselves up to do the more human stuff, like take 15-minute calls from people or think about coming to an event. So that's a really good example of kind of living that vision.

5:37

Speaker B

It was fun.

6:07

Speaker A

Cool.

6:08

Speaker B

Okay, so we're going to dive into some of these questions. This is, again, from our September 18th Intro to AI class. I also grabbed a few questions from our Slack community. Last week, Macy put a question in our Slack community about, you know, ask us anything, and she'd forward questions to the team. If other community members wanted to jump in and answer, they could. And there were a few nuggets in there that I weaved into this.

6:09

Speaker A

Right.

6:31

Speaker B

All right.

6:31

Speaker A

And I think people have done this with us before. I actually have no idea what the questions are. Kathy did send me a brief that has them, but I haven't even looked at it. So the way I always think about it is, when we're doing these classes live, I don't see the questions in advance; it's whatever people are asking. And so, in the spirit of that, when we do the podcast, I don't prep for these. It's just kind of whatever Kathy's got. If I have an answer, I'll give it. If not, we'll do a little research and get back to you.

6:32

Speaker B

Yeah. And again, as always, Claire does a lot of the heavy lifting behind the scenes and gives us this beautiful brief that we work from. So let's jump in. Number one: what does it take to have proficiency in AI for a person with a less technical background? And I know you've answered some of these before, but I think things have just kind of evolved. And I think, again, to your point earlier, some of them just bear repeating.

6:56

Speaker A

Yeah. The beauty is you don't need a technical background to get the benefits of AI anymore. So whether you're using ChatGPT or Google Gemini or, you know, some people use Anthropic Claude. Usually those tend to be the more technical users that are using Anthropic Claude. But Google Gemini, ChatGPT, you just need to be able to talk to it. I always tell people who are not sure how to get started: just talk to it like a human. And I know that sounds weird, but talk to it like a mentor, an advisor, a strategic thought partner. A lot of people talk to it like a therapist. Whatever it is you choose to use it for, a party planner, a travel planner, whatever, just talk to it and experiment yourself and get used to what kinds of prompts produce the best results. And that's really the best way to develop proficiency: just keep experimenting with it. Always try and find some new way to test it out. I'll often send it a podcast. And I use it a lot with my kids to try and help guide them. So, you know, if it's math and I don't know how to help them, I'll use the guided learning function in Gemini to help myself, so I know how to better explain things to my kids. I use NotebookLM. I've recently started playing around with, like, their quiz function, where you can build quizzes for things. So, yeah, I think it's just all about experimentation and kind of, like, an open mind to this stuff. My dad's a regular listener to the podcast, and he and I were talking a couple weeks ago, and he's like, you know, I think I should get ChatGPT. And I'm like, yeah, I think you should. Because he listens to podcasts all the time; it'd be cool to just experiment with that stuff. And I think that's what it takes: just an openness to say, hey, I think I'm going to play around a little bit, see if I can find a couple of uses for this thing. And you develop proficiency quite quickly just through experimentation.

7:19

Speaker B

I'm shocked he doesn't have ChatGPT.

8:58

Speaker A

I know. So I said the same thing. I was like, no, we definitely got to get it. Because he was saying, like, you know, is it worth the 20 bucks a month for someone like me? And I was like, you don't even need to spend the 20 bucks a month. Just get the free version and start playing around with it. And then if you see value in it, go ahead and do the 20 bucks a month.

9:00

Speaker B

Another thing to think about, if you're just testing it out, is to talk about something that you actually know the answers to. And, you know, see how you're prompting it. Like, am I talking to this the right way? Or how can I be crafting my prompts even better?

9:16

Speaker A

Yeah, one real quick example is just find those use cases, even if it's in your personal life. Like, we're planning a trip to Universal in Orlando with my family, and one of my relatives was like, you've got to look into the fast passes. You can't go to Universal without the fast pass; you'll never get on anything. And so my wife's trying to look into it, and she goes, I am so confused as to which is which. Do you get the three-park or the two-park? And so I'm just in ChatGPT. I was like, all right, well, let's just go figure this out. And that's the kind of way you start to just get used to it, and then you connect the dots of how you can then use it in your professional life. When these questions come up or problems are there to be solved, you realize, oh, I can just ask. I can just go talk to it.

9:29

Speaker B

We'll get on to question two in a second. But the other night, my husband's like, are you working on MAICON stuff? Because it was like 10 o'clock. I was like, no, I'm planning our trip to Scotland. He goes, oh, carry on. Keep planning. Okay, number two: what is the best way to use AI in a job search?

10:07

Speaker A

Oh, boy. Yeah, we've talked a lot about this lately. I think there are simple things, like using it to help you, you know, maybe analyze a job profile to see if it's a fit for you. So if your ChatGPT instance, for example, knows your interests and past conversations and things like that, you'd be able to take a job description, drop it in, and say, do you think this could be a fit for me? Could you help me craft a cover letter? So there's things like that. But there are also a lot of AI tools that have been built, and companies being built, to help people with the job search. I have a relative who was recently in the job market, and he was saying how he was using these automated tools to, like, find the jobs and even apply for the jobs. That was one of the things we talked about on episode 166 or 167, where people are automating the job application because it's so hard to get through that first filter, because a lot of the companies are now using AI to filter the submissions. So I think there's the basics of just helping you communicate better through emails, through cover letters. Then there are the extremes of, you know, automating the job search process through AI. So that is, I think, a growing field, and if you go and do some searches, you can find tons of information. Like I said, you can go back and look at the past few episodes and see where Mike and I talk pretty extensively about the impact of AI on the HR process from the company side and on the job seeker side. It is dramatically changing the way that all of that works.

10:23

Speaker B

And among the people that have been applying for jobs with us recently, have you seen the AI ones come through, and is it obvious?

11:59

Speaker A

I'm not the initial filter for these things; I would probably lean more on Tracy for that answer. But I would be shocked if people weren't using it. And honestly, it's kind of at the point now where I assume people are. I don't know if disappointed is the right term, but I would just say I would expect people who are applying for jobs with us as an AI company to probably be using AI. But I also want to make sure they're not overly reliant on it, that they actually have high levels of confidence in what they've sent to us. If, say, for example, we ask for a 30-60-90 day plan from someone we're going to hire, I don't want to feel like ChatGPT wrote that plan and that they have no ability to present that plan without their notes. So, yeah, it's kind of a slippery slope. If you become too reliant on the AI to get the job, then you're maybe not going to be positioned for success when you first get started. But the other thing I would say is, I know companies are using AI to conduct the initial interviews. Sometimes your first experience with a brand may actually be being interviewed by an avatar or something like that. So if you haven't been in the job market, it has changed, and I think AI is increasingly playing a role on both sides of the equation.

12:08

Speaker B

Agreed. Okay, number three, how can one find a clear learning path in the whole noise of AI tools?

13:20

Speaker A

I always tell people to keep this really simple. So if you're at the starting point, or even if you've been dabbling for a little while, I always say just get really good at one AI assistant. You know, ChatGPT or Gemini are the two most obvious ones to pick from; just use it every day. And then you can always add layers of complexity over time. So, like, NotebookLM, I always tell people that's a great one; get comfortable using that, find different use cases. That's a Google product, if you're not familiar with it. Deep Research is a great tool that is available in both Gemini and ChatGPT that uses their reasoning capabilities to do research projects that may have taken you hours or dozens of hours to perform. It does it in like 10 minutes. You have to verify the outputs and things like that. But if you just use an AI assistant throughout your day as an assistant to your workflows and planning, you know, helping as a thought partner, and you use it to help you create outputs like emails or articles, things like that, where you're just getting used to experimenting with it, that's the best way. And then you eventually start honing it to your specific skills or your specific job. So maybe you're layering in image generation or video generation or audio capabilities, or using the reasoning models more and more to help you with the deep thinking or long horizon tasks. But ChatGPT and Gemini: just get really good at using them, really good at prompting. That is, like, the basis for success. For many people, that'll get you like 80% of the way there: just getting good at using one of those platforms.

13:28

Speaker B

Yeah, that kind of leans into number four: I have limited time to explore AI. If I wanted to focus on learning one model or tool in depth, which should I start with, or is it a mistake to use just one?

15:03

Speaker A

It's not a mistake. I would say, if this is in your personal life, you can't go wrong with Gemini or ChatGPT; they are very comparable models. They're going to perform in very similar ways. There are certainly nuances to them, but overall you can pick one and go with it, whatever you're more comfortable with. If you're in a work setting and you are provided a license to, say, Microsoft Copilot or Google Gemini or ChatGPT, then focus your energy on getting the most value out of the thing you're provided. But sometimes what we see happen in the corporate setting is people might be provided a license, and, you know, let's say it's like a Copilot license, but maybe it doesn't have the full functionality of ChatGPT. If you're not aware, Microsoft Copilot is built on OpenAI technology. So they're basically serving up OpenAI's models in a Copilot wrapper. But sometimes those don't have the full functionality of the standard ChatGPT model you would get directly from OpenAI. I have friends who use Copilot at work, and it doesn't have a lot of the capabilities that they have with ChatGPT in their personal account. So, yeah, I use both. I do use Gemini and ChatGPT regularly; I bounce back and forth between the two. If it's a high-value use case, I will use both of them with the same prompt. I'll actually experiment with how the output differs between the two. Other than that, I don't know. It's really just kind of like, you know, a use case pops up, and on my phone the apps are literally next to each other, and it's just kind of which one do I click at that time to see what it does. So I don't know. I think if you have the money to have an account for both and you can play around with them, experiment with them, do it. If not, just get really good at one of them. I really don't think you can go wrong with either.

15:14

Speaker B

Agreed. Okay, number five, are there specific areas where AI models can help nonprofit foundations, small businesses, or resource strapped teams beyond content writing and research?

17:06

Speaker A

Yeah, that's a really good question. I mean, there are literally, like, thousands of use cases. What I would say is look at what your role is. So what are the objectives of your role? What are the tasks related to your specific role? You could broaden this overall to a team or to the organization. But this is a great example of: just use the AI assistant of your choice and ask this question to it. Here's what I would do as a sample prompt. I would say: okay, I'm an executive director of a nonprofit that's focused on this. You can even copy and paste the URL if you want: here's our website. I'm trying to find ways to drive efficiency and productivity for a team that doesn't have a ton of financial resources. We want to maximize the impact of AI. We have access to a ChatGPT Team license for our people. Help me find the best ways to use this. What are the highest-impact ways to use it? It could be things like grant writing, as an example; that might fit under this content writing and research umbrella. But another one, as a strategic thought partner, is, like, the dominant way I use it. I'm on a nonprofit board, and I've used it in that way to assist in development of ideas for the nonprofit board. So I would have that conversation. You could also use our JobsGPT tool: just put in specific titles of people within the nonprofit, and it'll help you do this. So on SmartRx.AI, under Tools, is JobsGPT, and it's meant to let you put in job titles, and then it'll help you prioritize AI use cases. The other one we have under that same navigation, SmartRx.AI under Tools, is CampaignsGPT. You could put in different campaigns you run as a nonprofit, such as a fundraising campaign, and it'll help you ideate ways to use AI within a nonprofit setting.

17:20

Speaker B

It could help you with building outcomes on how your nonprofit is successful and what's working. As far as, you know, your emails: what's getting people donating? What are the keywords in those emails that are getting people to donate? I mean, there are so many things I could think of that would be cool ways to help.

19:11

Speaker A

It's, like, endless. Yeah. And then the other GPT that would actually be really relevant here would be ProblemsGPT. So if you go in and just put in, like, here are the challenges we're facing as a nonprofit, that GPT is designed to help you write problem statements and then prioritize them for AI to support your efforts.

19:27

Speaker B

Yep. Okay. Number six: if AI is not used to replace humans in writing, thinking and innovation, what are the primary drivers of ROI for companies?

19:46

Speaker A

Well, the two that we always look at are, first, efficiency gains: doing the work you're already doing faster, thereby saving time that you can repurpose to other, higher-value things. So let's say we apply AI in a number of areas, and that saves us 20% of the hours we would have spent on that stuff over a period of time, say 1 month, 3 months, 12 months. Rather than getting rid of humans, we can then take that 20% time savings and apply it to initiatives we didn't have the time to do previously. So that is one way to do it. The other way to think about it is a productivity lift, which is: we used to do one campaign a quarter, now we can do three. So we're able to do more within the same amount of time with the same amount of human resources. So that's where it comes in. Again, some companies will choose to take those benefits and replace people with them because the demand for their company isn't as high. So if demand remains flat for products and services, and a company saves all this time or can create more output, then they might just get rid of some people. The idea, though, is use AI to accelerate growth and innovation to the point where you don't need to replace people. You transfer the savings that AI provides to you, the lift it provides to you, into doing things that accelerate growth and innovation because there is more demand for your product or service. So that's the ideal situation: you use the savings, the lift, to drive growth and innovation. It's just that not every company is going to be in that position. Some companies just have flat or decreasing demand, and in that case, I think that replacement of humans is probably a more likely outcome. It's just economics. It's not necessarily even bad people running a company making bad decisions. People have fiduciary responsibilities to their shareholders, to their investors, whatever it may be. And so that's just kind of a byproduct.
And I think part of the individual thing to be familiar with here is: are you in a company that sees decreasing demand? You know, if the company you're currently at is flat or declining in growth, then there is a higher probability that AI will eventually replace some people at that company, that they will just take those savings and reduce the headcount. If you're in a company that has a significant addressable market ahead of it, and you're seeing growth and increasing demand for products and services, and you have people at the top, in the leadership positions, who see the potential to unlock human capabilities here and, like, enrich them and augment them, then you're in a good place. So that's kind of how I guess I would think about it.
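The two ROI framings in this answer, efficiency gains and productivity lift, boil down to simple back-of-the-envelope math. Here is a minimal sketch: the 20% savings and the one-to-three campaign jump come from the discussion above, while the team size and hours are hypothetical numbers added purely for illustration.

```python
# Toy math for the two ROI framings discussed above.
# The 20% gain and 1 -> 3 campaigns mirror the episode's examples;
# the 5-person team at 160 hours/month each is a made-up assumption.

def hours_freed(total_hours: float, efficiency_gain: float) -> float:
    """Efficiency framing: hours saved on existing work that can be
    repurposed into higher-value growth initiatives."""
    return total_hours * efficiency_gain

def output_multiple(campaigns_before: int, campaigns_after: int) -> float:
    """Productivity-lift framing: more output in the same time with
    the same headcount."""
    return campaigns_after / campaigns_before

# A hypothetical 5-person team, 160 hours/month each, 20% AI lift:
freed = hours_freed(total_hours=5 * 160, efficiency_gain=0.20)
print(freed)  # roughly 160 hours a month to reinvest, not to cut

# One campaign a quarter becomes three:
print(output_multiple(campaigns_before=1, campaigns_after=3))
```

The point of the efficiency framing is that those freed hours fund new initiatives rather than headcount reduction; the lift framing instead measures output per unit of time with the same people.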

19:57

Speaker B

Okay, number seven, how do you see marketing in the future? Will we still have ads, and if so, in what form?

22:50

Speaker A

I can't see a future where we don't have ads. How they get served is certainly, in large part, going to be determined by what happens with Google. So if we just think about how much ad spend goes to Google, and I guess you could extend that to YouTube and things like that, and then even Meta. So we think about the digital ad space: how people find information, how they interact with each other, how they make purchasing decisions, how much of those future things are done by our AI agents and not by humans. There are just a lot of unknowns, a lot of variables that will affect how ads are created and distributed and the impact those ads have. One big one to think about would be voice. So if all of us get really comfortable talking to our AI assistants, we don't go to Google and put in a search and see all the links. We maybe don't even go into the chat interface and have a conversation that has the potential to show links to us. We just talk to our assistant and it responds with the answers. Or it says, I can make that purchase if you would like, and it goes and does it. And we never see a screen where an ad can be presented to us. Well, if they don't find a way to inject voice ads into those conversations, which would probably ruin the interface, then what? How do we get to people? I will say, nobody really knows the answer to this question. I have talked with executives in the ad industry who are asking these same kinds of very important questions about the near future. The same could be said about how people find your website or your products and services through organic search results if they're not looking at a screen. So ads will continue. What they look like and how they're served to us, I think, is a really big unknown for all of us.

22:58

Speaker B

Yeah. I mean, it's just changed so much since we started our careers.

24:54

Speaker A

Yeah.

24:58

Speaker B

Yeah. I mean, when I was at an agency 30 years ago, there was like a digital department, and there were like two people in it.

24:58

Speaker A

Yeah. We can age ourselves here. Like, I remember my internship, I was at a PR firm, and we had those big media print books, like thousands of pages. And you would build a media list by literally going through and highlighting stuff, then copying it on the copy machine, and then giving it to the office admin, who would then enter them into an Excel chart. Like, yes, things change, the industry transforms.

25:06

Speaker B

Yes.

25:28

Speaker A

I just don't know. It's really hard to look out ahead and figure out what this looks like. I know AI is going to help create the ads, that much we know. Whether they're image ads, video ads, audio ads, AI is going to play a massive role in the creation of that stuff.

25:30

Speaker B

Okay, number eight, how much should we trust our time and money investments in this technology when none of the major players in the space currently have a defined path to profitability?

25:49

Speaker A

Yeah, no, I mean Google certainly is a very profitable company. This part of their business I wouldn't say is profitable yet. Like they're not. I wouldn't say Gemini is probably a profitable business unit. But Gemini is infused into everything they do. Like it's built into Google Workspace and Google Cloud and you know, their search and everything. So you know that that's one of the things is you can say, well, if I had to bet on one company that survives through this and leads, Google is probably my bet. So that's a, that's a safer bet than others. You could make an argument something like Anthropic is gonna struggle. Something we might talk about actually on episode, episode 170. Would it be, I think as our next weekly. Yeah, yeah. Like I'm seeing some stuff that makes me really wonder about the long term viability of Anthropic right now. Both their relationship with the government is a major one and then OpenAI's seemingly endless ability to raise money to build up the infrastructure for what they want to do. I don't know that Anthropic is going to keep up. So I don't know. I think OpenAI is as safe a bet as you can make on a future player that will be significant in the economy for a long time. I think they have sort of hit escape velocity where anything is possible, but they certainly seem like a safe bet to be made right now. But when you start getting into these other more niche players like video generation technologies or image generation technologies that don't live within one of the major platforms, that's more questionable because they're ripe to be acquired or aqua hired by one of the big players and then who knows what happens to their tech. So that part I would be a little bit more worried about building your technology stack at a company around these startups that are getting these crazy valuations and raising a bunch of money. 
They are not predictable at all at this point, and some of them are really good companies, but I have no confidence that they'll be around in 18 months.

26:00

Speaker B

Yeah, but when you talk about the major players and a defined path, there's a lot more we don't know.

28:03

Speaker A

Yeah, yeah. But again, I feel like it's a pretty good bet if you're looking at a Google, a Microsoft. Meta is not really much on the business side. OpenAI. The ones that are total unknowns to me are xAI and Anthropic and things like that. But I think you're in a pretty good place to bet on Microsoft, OpenAI and Google at this point.

28:09

Speaker B

Okay. Number nine: I am being asked to help rank clients on ChatGPT. Can you speak to the change in SEO to GEO and AEO?

28:39

Speaker A

Yeah, come to MAICON and go to Wil Reynolds' session and Andy Crestodina's session, probably. Okay, so I always caveat this: I do not claim to be an SEO expert. I ran a marketing agency for 16 years that I sold in 2021. We have done SEO work for SmarterX, the Marketing AI Institute, through the years. I will say my general approach right now to SEO is to create as much value as possible for people through as many channels as possible where they will find us. So I stopped asking for reports on our organic traffic probably 18 months ago. I assumed years ago that organic traffic was going to go to zero, that it just would not be a key part of what we did. That was around the time you started to see this kind of emerging industry of, well, how do we get ranked in the large language models so we show up in ChatGPT or in Google Gemini when people search? And no one knew at the time. I'm actually convinced that the AI companies themselves weren't sure what the algorithms would be to surface stuff within there. So I say all that with: I am not an expert on the latest tips and tricks to get ranked within these tools, and I think it evolves all the time. Our strategy is this: we know our audience consumes podcasts, we know they consume information on YouTube, we know they continue to subscribe to our newsletter and come to free classes. And so when we think about our strategy of value creation, how do we help as many people as possible, we don't center that strategy around getting people to our website. We assume that if we put out video of the podcast, and we include transcripts of that podcast, and we do free courses, and we put things on YouTube, the language models train on that stuff, among a variety of things. And so our strategy is: let's just help people and create value in the dominant channels where we know people search for information, and which we also know these models learn from.
And over time, we're just kind of playing the long game that that should work. But I would say we just don't depend on organic search results for our business. I mean, we made that change years ago. You're the chief marketing officer, Kathy; is there any other context you think about this in?

28:47

Speaker B

No, I agree. It's just: answer the questions, change our wording a little bit, you know, test some things out, see what's resonating, see where we are showing up on some of those search terms. And I'm still watching organic traffic; it's not like I don't pay attention to it. But everything you said is spot on.

31:11

Speaker A

And I know our traffic from ChatGPT in particular has been skyrocketing. Like, I know what we're doing is working. How it's working, I couldn't tell you, like, the analysis of it, but I know our traffic is increasing from those tools.

31:30

Speaker B

Okay, number 10, should companies be investing in their own AI infrastructure or is it safer to rely on external platforms?

31:45

Speaker A

So AI infrastructure is kind of a broad term. It's hard to know exactly what the listener, or our attendee for the course, is thinking in terms of AI infrastructure. But if we think about it from a corporate setting, like building your own model, or bringing in an open source model and training your own version of a ChatGPT internally, basically: I think for a lot of organizations, especially ones in highly regulated industries or industries where keeping information private is a much greater consideration than for others, those people are probably going to have private installations of models. Their IT team, their CIO, are going to work with these major model companies. They're going to bring in a model, or an open source model, and then they're going to do some fine-tuning, some training on top of it, and then everything may live within their internal servers, not up in the cloud. That's going to happen in these more sensitive industries. Generally speaking, though, I just think about how for the last 20 years we've all just kind of moved everything to the cloud. We don't build our own CRM software, we don't build our own website management system, our learning management systems. We rely on third-party technology; we let other people absorb the cost to build the initial thing, and then we benefit by paying monthly fees to build on that infrastructure. I don't think that really changes with AI. I mean, I still look at it even for our company and say, okay, for 20, 25 bucks a month per person, we have access to intelligence on demand through ChatGPT and Gemini.
And so I think for a lot of companies, especially SMBs, small to mid-sized businesses, you're just going to treat it like you would any other cloud SaaS solution, where you're just going to pay your monthly fee and take the best available, and let the software companies and the hardware companies spend all their investments building that so we can all benefit from it.

31:53

Speaker B

And we're seeing that even with our education. You know, companies are saying, oh, we need to educate our teams, and then they're like, oh wait, you already did it. Yeah, we'll just use you.

33:57

Speaker A

Right?

34:05

Speaker B

Save us so much time. Right. Okay, number 11: a negative impact on humanity seems like one of the biggest risks of AI. How can we mitigate those risks through corporate and business responsibility?

34:06

Speaker A

The thing we always guide people on: if anybody listening has taken any of our AI Academy courses, the Scaling AI course series in particular, we go through these five steps that every organization should take, and one of them is responsible AI principles and generative AI policies. And so I think what you need to do is have that foundation for your organization, for your people, that this is how we approach AI from a responsible perspective, and then infuse that into education and training internally and the change management. I've too often seen companies that have taken the steps to develop responsible AI principles and generative AI policies, but then their people aren't fully trained on them. Sometimes their people don't even know they exist. So it hasn't really become a part of the culture and the fabric of how the business operates. But I think that's essential, and that has to come from the top. You really need, ideally at the CEO level, someone pushing the responsible AI principles, the use of the technology in a human-centered way. Otherwise everybody's going to kind of freelance and do their own thing, and they might not even intentionally make bad choices. But it's very easy to make mistakes with your use of generative AI if you don't have the proper education and training internally and you don't have that governance in place.

34:19

Speaker B

Yeah, and I think there's such an opportunity to not only tell people what they shouldn't be doing, but tell them what they should be doing. I don't like having, here's your list of things you can't do. Okay, but there's such an opportunity to tell them all the good things they could be doing with it.

35:41

Speaker A

Yeah. We always talk about how responsible AI policies are just guidelines to empower people. You want to enable experimentation and innovation within a safe environment. And so you're providing guardrails to actually open up what people can do, so they're not always worried about, can I do this, or can I use this data set? They know what they're allowed to do, and then they can push innovation with that knowledge in place. Especially since they're...

35:55

Speaker B

...going to be doing it anyways.

36:23

Speaker A

Yes.

36:24

Speaker B

Okay, number 12, what are your thoughts on the loss of critical thinking?

36:26

Speaker A

If we let it happen, it's going to be a major problem. So the way I think about this is: we all get to the positions in our careers that we do because we solve hard problems. We go through the process. It's not always fun. I always think back to learning math in high school. It was not my favorite thing, but I had a teacher who, when I asked, why do we do this, said: because it's hard. Because you're learning how to do hard things and solve problems and think in critical ways and build plans for how to attack a problem. We know that to be fundamental to success in business and in life. And so if the whole next generation just asks ChatGPT how to do everything, and they can't critically assess the outputs of ChatGPT because they've never had to actually do the work themselves, then we can lose that ability very quickly. That's why I really love that Gemini and ChatGPT both now offer a form of guided learning. They call it different things, but basically it doesn't give you the answers. When you turn on guided learning, it teaches you how to solve the problem. I think I mentioned earlier the example of helping my daughter with a math problem. If she says, I'm not sure how to do this, I don't give her the answer. ChatGPT doesn't give her the answer. Gemini doesn't give her the answer. I will say, okay, let me see the question. And then I will go into the guided learning and say, okay, help us solve this. And it'll say, okay, how do we think about the first step? And we think about the first step, we put it in, and it says, okay, you're on the right track. It never gives you the answer. And so I think that's a great approach for students. But that same approach can be applied to accelerate learning and critical thinking within business environments.
When you have entry-level employees who don't know how to solve hard problems in business, imagine a guided learning model for them. Like, how do I build this marketing strategy? It doesn't just write the marketing strategy for you; it forces you to go through the steps to collaboratively build the marketing strategy.

36:31

Speaker B

Right.

38:29

Speaker A

So I actually think if done right, we can accelerate learning and capabilities and domain expertise way faster than we all picked it up. It took years of trial and error for all of us. And maybe you didn't have a great boss or mentor, and you got none of that in the first few years, no transfer of knowledge or capabilities. But we all have an AI assistant on demand that has advanced reasoning capability, if we don't use it as a crutch and instead use it to improve ourselves. There are always going to be people who take shortcuts and don't do that. There are always the people who just use the calculator and never actually learn how to calculate percentages or anything like that. They're always just going to lean on the technology. But eventually that catches up to you. I think the people who use the technology to improve themselves, in the long run, always win.

38:29

Speaker B

Agreed. Number 13. How are organizations putting ethical AI frameworks into practice? And where should they draw the line on privacy?

39:19

Speaker A

So, I mean, putting them into practice again goes back to this idea of having the principles and policies in the first place and then training your teams on them. That's really the only way to do it. You have to define what they are, and then you have to integrate them into everything. So, I don't know, this is kind of related, but when we build strategies at SmarterX, I always try to encourage the team to think about what's the more intelligent side and what's the more human side. So if we're going to do an email campaign, the more intelligent side is going to be, okay, we can automate pieces of this, we can use AI to help us with subject lines, we can maybe use AI to help with the analysis of the performance of the campaigns, things like that. Then it's, okay, well, what's the more human side? Well, if we save five hours here, Kathy can do an office hours on Monday with MAICON attendees. So that's how we start to think about this. And that's because our ethical AI framework is based on our responsible AI manifesto, or principles, which say it has to be human-centered. So everything for us has to come back to: how does this benefit our team? How does it benefit our audience, our community? And as long as you think about that and it's core to everything you do, it becomes second nature. I don't have to stand around telling our team this every day and put it in our team chat every day, like, hey, don't forget this. It just becomes part of the culture, and then everyone you hire picks that up from other people. It's like, oh, cool, we actually do think about the human in this equation.

39:28

Speaker B

Yeah, and this is kind of related, but there's even just the human side of me writing those emails. Do I use AI sometimes to pull out some nuggets of what has worked and where we got the conversions? Absolutely. But I wrote one two weeks ago, and I was so proud of myself. Like, that was a really good email. And I just wouldn't have had that feeling of being proud of myself if I had been like, oh yeah, cut, paste, put it in there and send it out. I like doing that sort of thing, and it gives me pride and gets me excited about the event even more than I already am, which is probably not even possible.

40:50

Speaker A

But yeah, I think that's a good point, because it actually goes back to the whole idea of this human-centered approach. We want employees to feel fulfilled. So I'm not going to mandate, Kathy, you have to use AI to write it because you're going to save two hours. If Kathy decides that she's going to get more fulfillment out of actually doing the work, do it. Same thing with the Exec AI newsletter I send every weekend: I write 100% of that. AI has no involvement in it, and I don't want it to be involved. I will take the two to three hours every week and write that thing myself, because I just feel like it is what the audience expects, and it's what I want to create, and it makes me feel good when I finish it. So, yeah, again, you have to live the principles you create. If you want to be a human-centered company, you have to enable that to happen, which means giving your people the freedom to choose when they actually want to do the task. Even if AI can do it, it's sometimes better for them to be the one that executes it.

41:27

Speaker B

Yep, number 14: how transparent should companies be when using AI in their customer experiences?

42:24

Speaker A

I don't know. That's a good question. I mean, I generally just err on the side of transparency overall. But I also think that sometimes people just don't care. If the expectation is, I just want the answer, I don't care if it's coming from your knowledge base or some chatbot is giving me the answer. If I just want to know how to get logged in, or how to get the refund, or whatever it is, how to book my flight, how to get a hotel at the event, they just want the answer. There's no expectation that a human is on the other end giving the answer. But let's say, and I don't want to make a sad example, but let's say that the policies for our event don't cover bereavement, which I assume they do. Let's say you just lost someone, and it's five days before the event and you can't be there. You don't want to talk to a chatbot. You want to have a human on the other end who has empathy, who says, listen, don't worry about it, we will help you out, we'll figure this out. I expect a human on the other end in that case. And so I think you have to solve for when the expectation is that I'm going to talk to a human, and then you have to meet that expectation. And when it's not, and it's just information gathering or quick answers to common questions, then you don't need to say it's an AI chatbot answering you every time. It's just, I don't care, I just want the answers. So I think it's a mix between what the expectations of the people interacting are, the experiences your customers are having, and what the needed level of human involvement is. And if it's a gray area, like, is this really their CEO sending this message, sometimes you need to just err on the side of transparency. If it's not authentic, that's okay; just make it clear that that's what's happening.

42:34

Speaker B

Right. And we're in the process right now of doing that. Fifty-plus percent of the inquiries we get are something that is a one-sentence, one-link answer, and people just want to come on there: Are you still accepting speaker submissions? When is the event? Is it virtual? Is there a virtual option? All of those things. They don't need to talk to anybody; they just want a quick answer. But if there's a bigger issue, they always can say agent or human or whatever, and it'll come to one of us. But right now it's still just us.

44:18

Speaker A

There are a lot of humans on the back end still. Interesting. Yeah, I will say, on episode 167 I think it was, we led off with the conversation about AI avatars, specifically related to CEOs. Or maybe it was main topic three, I think. But we had this kind of debate about authenticity and what the expectation is, and would you use AI avatars if you were creating courses for your customers, things like that. There is no right answer here. I think there's this very expansive gray area right now where brands and leaders are having to make their own choices about what's right for their organization or for their personal brand. But again, I think the default should be: what is the expectation of the customer here? Are they expecting a human or not? And if it's not a human, would the expectation be that you would be transparent about that? You don't want them to ever feel like you're hiding something from them.

44:48

Speaker B

Correct.

45:39

Speaker A

And pretending to be something it's not?

45:39

Speaker B

Exactly. Yeah. And back to the use case model you talk about all the time. It's like working with Noah: what are all of the things that he is seeing on his end, as far as the questions that are being asked? What's that whole process? Where is the human? Where can AI, where can any technology assist? It's breaking those down into places where AI can jump in and places where AI either can't or we don't want it to be.

45:41

Speaker A

Yeah.

46:05

Speaker B

Okay, number 15, what's the trade off between using safe enterprise ready models versus open and uncensored models? And where should companies draw the line between innovation and risk?

46:08

Speaker A

The risk tolerance is going to be different for every company. It's going to be subjective to your company, your industry, your leadership. The trade-off on the enterprise-ready models, depending on what that means for your company, could be fewer features, fewer capabilities; I often say, like, neutered versions of the full models. So again, the example I used earlier: ChatGPT Team or Enterprise, if you go direct, may have more of a feature set than a controlled version of Microsoft Copilot within an enterprise. That's not going to be true all the time, but that is an example. And then the open models, you know, sometimes they'll give companies more confidence to use them in more innovative ways, where they're more comfortable putting data in because they know the data is not going back to one of the three big companies, not going to be training anybody's future models, that kind of stuff. So these are the things where the technology people, the legal people, the IT people: this is why they get paid. The money they get paid is to figure this stuff out. Look at the risk management profile, balance that with the need for innovation. And this is why you also want to have an AI council that has multiple voices within it. You can't just let IT make the decisions around this stuff. You need departmental leaders, like a chief marketing officer, for example. Those people need to have a voice in this, because they're the ones who are going to understand business cases and use cases better than the IT people are. And so you cannot treat this as a technology problem. This is a business transformation opportunity. You have to always be balancing that innovation and risk. And my opinion is, we often need to err on the side of innovation, or else you're just going to get run over by the competitors who are willing to take more risk. Again, not universal; it doesn't mean every company should think that way.
But I think too many companies get caught up in the security and privacy risk stuff that's pushed by IT and legal, and they take too long to move to the innovation side. And maybe the greatest risk is obsolescence from not moving fast enough.

46:20

Speaker B

Yep. And if you are coming to MAICON, I would find Chris Penn in the hallway and ask him this question. He would nerd out on this question with you. Number 16: given the challenges, changes and harms technology has already caused in human relationships and connection, what uniquely human qualities should people focus on to be successful and happy in this new reality?

48:36

Speaker A

Oh, boy. So the human relationship and connection stuff: AI is probably going to be both helpful and harmful in that environment. We've talked quite a bit on the podcast recently about how reliant people are becoming on AI assistants as companions, friends, mentors. It's a common use to talk to these things and develop, quote unquote, relationships with your AI assistants; voice assistants especially are becoming a bigger thing. So we always have to be aware of the harms. In terms of unique human qualities that people should focus on to be successful and happy, I don't know that that's changing. I guess I could think about my own kids, at 12 and 13, and the kinds of things I'm trying to instill in them. They're not allowed on social media yet, and I don't foresee a very near future where they will be. I think that social media can be beneficial when used properly, but there are just so many downsides, and there's so much negativity there. I think people just need to generally stay positive, stay focused, be curious, be willing to experiment, be able to find fulfillment in the work they do. I mean, this gets kind of philosophical, I would say, but I think the things that have always made us happy and successful don't change. You just have to adapt and figure out how to do it, given this kind of technology. And I think one of the biggest questions probably goes back to what we talked about earlier: just because AI can do something you do doesn't mean you should let it do the thing you do, if that's where your fulfillment and joy in your work come from. My wife is an artist. I wouldn't say, hey, you should use ChatGPT all the time to do your art, or Nano Banana to do your art. If you find fulfillment in the art, it shouldn't replace what you do.
You should find ways to enhance what you do as an artist. And I think that carries over into every profession. It's kind of the basis for my MAICON keynote, the Move 37 moment for knowledge workers. The whole premise is that AI will be as good as or better than all of us at what we do. At some point, we will all have a moment where we realize the AI is superhuman at the thing that made us feel fulfilled. And that's going to be a very weird reality for people to live within. And that's part of what I'm trying to do with my keynote: look at this reality, that this is where we're going to be, and say, okay, but it can be amazing if we approach it the right way. So, I don't know. Optimism, empathy, curiosity, intrinsic motivation: I think those things still matter significantly, and I think they're still relatively unique to humans.

48:57

Speaker B

Yeah, I think that's the key point: doing the right thing. It sounds so obvious, but I think it needs to stay at the forefront of everything that we're doing right now.

52:00

Speaker A

Yeah.

52:09

Speaker B

Number 17, let's talk about education. We get asked a lot about AI's impact on learning, what students need to be learning, what educators need to be teaching. How have your thoughts changed or evolved over the last 12 months?

52:11

Speaker A

I don't know that they've changed dramatically over the last 12 months. I think the technology has evolved. So, the example I gave earlier of guided learning: I used to do that through a prompt. When I needed to help my kids with homework, I would say as the starting prompt, I am helping my seventh grade daughter with this homework. Don't give us answers. I want you to teach us how to solve this. Now I don't have to, because now guided learning is there. NotebookLM, which I mentioned, Google is aggressively building features into that are meant to help people accelerate their learning. Things like quizzes and video overviews, the ability to create whatever kind of report or training tool you want right within the platform. So I think the technology is moving now to be able to help. The problem is most educators and administrators are unaware of that technology or have not figured out how to bring it into their system. You're closer to our community than I am, Kathy, in terms of daily interaction with the people within it. I know we have an incredible base of professors and administrators, especially at the high school and college levels. And I know there are a bunch of them doing incredible things, working the best they can within the systems they're confined to, to try and innovate and bring these things in in real time. I just don't think enough of that is happening. I've thought myself about trying to find five hours to put together a deck on guided learning capabilities and then go to my kids' school, go to the administrators, and say, here, I will teach you how to do this. This is an asset right now to help kids, and in your guidelines it's cheating. And it is the opposite of cheating. It is truly personalized learning. That's not okay. We can't go a full school year without taking advantage of these capabilities.
So the thing that has changed is that the technology is moving quickly to allow personalized learning. Schools aren't keeping up with that technology, but that is not new. Technology always moves faster than the schools can keep up.

52:24

Speaker B

I think a lot of the higher ed folks that we have in our community have bonded together, and there's this amazing group that's planning a whole thing at MAICON on their own. It's a lot of the administrators or marketers from those institutions who are the ones coming, and they're going to their professors and educators saying, look at what I can do in my role; you need to be teaching all of this to your students. So the connection between the administration and educators is hopefully getting closer. Number 18: how do you think brands can protect their voice when people have all of these AI tools?

54:41

Speaker A

I don't know if we're talking about voice as in personal voice, like the CEO's voice, or the brand voice overall, or maybe the impact they have through the content they create, that kind of stuff. So I'll just answer this broadly, and then Kathy, if you have anything to add on this one, definitely jump in. I go back to the idea of authenticity. I think that, whether it's individual leaders or the brands overall, we have to really think about how we remain authentic in the content we create and maintain that brand voice that lives through everything we do, whether it's audio with podcasting or video or the text we write. We can't let the language models just replace that. The authenticity comes from the individuals within a company, the experiences, the consumer brand experiences. And I deeply believe that humans have to play a key role in all of that. Anybody can start a company and have ChatGPT write the plan and create the brand guidelines and write the emails and do all the things. But it goes back to Kathy writing an email for MAICON. Kathy has been instrumental for six years. She knows thousands of our community members, so when she writes the email, there is something much deeper to that email than ChatGPT just writing an email based on its training data for the average user. Maybe if you put things side by side, it's hard to tell: was this an AI model, or was it Kathy? But our belief is the human element is a distinguishing factor, especially when you get into really knowing the people behind your community. And so I think you just can't lose sight of it. It goes back to this whole idea of a human-centered approach to everything, and empowering your people to make the choices about when AI is right and when the human component is right.
And I think brands that just take the efficiency and productivity gains and push the human part aside, in the end, they lose. There are going to be short-term gains and things like that. But I do think that the more authentic, more human brands, in the long run, are the right play.

55:19

Speaker B

Great. Number 19: what AI advances and opportunities have the SmarterX team most excited, and what has it most frustrated?

57:52

Speaker A

I talk a lot about deep research and the reasoning models, in particular the ability for these things to build their own plans and do some longer-horizon tasks like research reports. It's the thing I'm probably still most excited about, more than I am about agentic AI at this point. I know agents and agentic AI is kind of the stock answer for most people. I'm just more bullish on the reasoning models and things like deep research, because they're here and now and they're pretty advanced in their capabilities. I also find them frustrating, because I could come up with 10 research projects I would love to run right now, and Gemini or ChatGPT could do 30-page reports for every one of them. But I can't do anything with those things until I verify the citations and read them myself and do all the edits. So you have this infinite ability to create and conduct all this research, but you still have a finite ability to actually verify it and apply your own thinking to it. And so there's this part of me that just wishes I had more time to experiment with these models and create things for people, to create more value for our audience. But I'm limited by the human capacity to do it. And in the end, I actually think that's a good thing. It forces us to not let the AI take over, because the human has to be there to get the true value and authentic outputs from these models. So yeah, deep research is kind of my current favorite thing, and it's also very frustrating to me, because I'm trying to solve how to scale it.

58:01

Speaker B

Right. Okay, last but not least, number 20. As you've been putting the final touches on the agenda, what session at MAICON are you most looking forward to, based on the SmarterX roadmap for next year?

59:37

Speaker A

I think I've mentioned my own keynote is the one I've been most excited about. I've actually been thinking about this talk for years, since probably the first time I watched AlphaGo, the documentary. So I'm personally most excited to create and give that presentation. In terms of the other sessions I'm excited to watch, it would be hard to pick, honestly, between the main stage sessions. There are incredible breakouts too; I've just been closest to the identification of speakers and the build-out of the agenda for the main stage. One of the early ones that I targeted to have was this "human side of AI inside the leading labs" session, where I wanted to get people who are taking a human-centered approach to the creation and training of these models to come and tell the story of the people behind the models, that it's not all technology. And so, you know, the fact that we were able to get people from Google DeepMind and Meta, and then a moderator who's got a background from Meta, Google and Anthropic, I'm really excited about that one. I have an amazing conversation planned with Dr. Brian Keating. There's the closing keynote, the "reimagining what's possible" one, there's an AI filmmaking session. Honestly, all nine of the sessions we've announced so far I am excited for, and I hope that people find tremendous value in every one of them. My goal with the main stage was that each one of them on its own should be worth the price of admission, that even if you take away just one to three things from a talk, they should be on topics and from people that leave a lasting impact on you. And so that's kind of my goal for the main stage.

59:48

Speaker B

Yeah. The human side of AI one I'm excited about just because the topic is of interest to me, and it's three women.

1:01:28

Speaker A

Yes.

1:01:33

Speaker B

Which makes me very happy.

1:01:33

Speaker A

Yeah, for sure.

1:01:34

Speaker B

All right, we are done with our 20 questions for this week's AI Answers.

1:01:37

Speaker A

We will be back. We've got our weekly episode coming up as usual, and then our next AI Answers will be part of the Scaling AI series. So we've got Scaling AI coming up in October; that will be the next free class. We'll put a link in the show notes so you can check that out. And again, we do Intro to AI and Scaling AI every month. You can always go to the SmarterX site and register for the upcoming one. And again, thank you to Google Cloud for being our presenting partner on the AI Answers podcast series. Thank you, Kathy and Claire. Thanks, Claire, for organizing everything. We'll talk to you all again next time. Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

1:01:40