The Artificial Intelligence Show

#185: AI Answers - Getting Started with AI, Core AI Concepts, In-Demand AI Jobs, Data Cleanliness & AI Fact-Checking

56 min
Dec 11, 2025
Summary

Episode 185 of The Artificial Intelligence Show features Paul Raetzer and Kathy McPhillips answering 13 audience questions from their Intro to AI and Scaling AI classes. Topics range from in-demand AI jobs for non-coders to workforce reduction concerns, data cleanliness, AI fact-checking, and the future of AI advancement including recursive self-improvement and on-device model deployment.

Insights
  • AI jobs are emerging not as new titles but as AI-enhanced versions of existing roles (customer success, sales, marketing) requiring AI literacy as a baseline competency
  • Generative AI use cases should be prioritized over predictive ones for quick wins, but domain expertise is essential for data-dependent projects to avoid hallucinations and errors
  • Workforce reduction is inevitable without concurrent innovation and growth; companies must redistribute AI-freed capacity into new value-creation work, not just cut headcount
  • On-device AI deployment (Apple's likely strategy) could commoditize frontier models and shift competitive advantage from proprietary model development to distribution channels
  • Recursive self-improvement represents the next frontier but lacks guardrails; advancement depends on voluntary lab collaboration rather than regulation
Trends
  • AI literacy becoming baseline requirement across all knowledge work roles, not a specialized skill
  • Shift from AI replacement fear to augmentation mindset through targeted use case identification and employee engagement
  • Enterprise focus on generative AI quick wins before tackling complex predictive analytics requiring data infrastructure investment
  • Workforce reduction pressure mounting in Q1-Q2 2025 as companies pursue efficiency gains without corresponding growth initiatives
  • On-device model compression emerging as competitive battleground between cloud-dependent labs and device manufacturers
  • Recursive self-improvement and autonomous agents becoming primary focus for frontier AI labs in 2026
  • Regulatory fragmentation at state level creating compliance complexity without federal AI governance framework
  • Marketing and content creation roles increasingly requiring AI verification and fact-checking capabilities
  • AI orchestration and workflow optimization emerging as high-value non-coding career paths
  • Industry council formation (Marketing AI Industry Council) indicating need for cross-company standards and best practices
Topics
  • In-Demand AI Jobs for Non-Coders
  • AI Literacy and Foundational Training
  • Generative vs. Predictive AI Use Cases
  • Data Cleanliness and Legacy Data Integration
  • AI Fact-Checking and Output Verification
  • Workforce Reduction and Job Displacement
  • AI Agents and Orchestration
  • On-Device AI Model Deployment
  • Recursive Self-Improvement and Safety
  • AI Regulation and Governance
  • Problem-Based and Use Case-Based AI Roadmaps
  • Change Management and Adoption Fear
  • AI Search vs. Traditional Search
  • Reasoning Models and Inference Time
  • Enterprise AI Tool Implementation
Companies
Google Cloud
Presenting sponsor of AI Answers series, Intro to AI, and Scaling AI classes; partnership on AI literacy initiatives
Google
Discussed as dominant search player adapting to AI-driven search; building reasoning models and Gemini; on-device dep...
OpenAI
Frontier AI lab building ChatGPT; discussed as cash-constrained compared to Google; collaborating on agent standards ...
Anthropic
Frontier AI lab building Claude; collaborating on agent standards framework; mentioned for safety-focused approach
Apple
Discussed as lagging in AI (Siri failures); strategy to compress frontier models for on-device deployment within 1-2 ...
Stripe
Referenced as prior employer of Gene DeWitt, who optimized SDR team from 10 to 1 person using AI agent orchestration
Vercel
Current employer of Gene DeWitt, go-to-market expert featured on Lenny's Podcast discussing AI agent workflow optimiz...
Nvidia
Critical to AI compute supply chain; potential bottleneck for AI advancement if supply chain breaks down
TSMC
Chip manufacturer partnering with Nvidia; critical infrastructure for AI compute supply chain
Cleveland Clinic
Healthcare organization implementing AI; referenced for potential to advance innovation beyond current capabilities
SmartRx
Paul Raetzer's company; building AI Academy, hosting Intro to AI and Scaling AI classes, forming Marketing AI Industr...
Tesla
Integrates Grok AI assistant into vehicles; mentioned as example of on-device AI experimentation
xAI
Elon Musk's AI lab; discussed as potential competitor in recursive self-improvement race without same safety guardrails
People
Paul Raetzer
Founder and CEO of SmartRx and Marketing AI Institute; host of The Artificial Intelligence Show and AI Answers series
Kathy McPhillips
Chief Marketing Officer at SmartRx; co-host of AI Answers series and monthly Intro to AI and Scaling AI classes
Gene DeWitt
Go-to-market expert from Stripe/Vercel; featured on Lenny's Podcast for optimizing SDR workflows with AI agent orches...
Gavin Baker
Podcast guest discussing gross margins of different business models; influenced discussion on accounting categorization
Amanda Todorovich
Cleveland Clinic executive; interviewed at AI for Agency Summit about AI implementation and innovation opportunities
Elon Musk
CEO of xAI; discussed as racing to advance AI without same safety guardrails as other frontier labs
Andy Christadena
Expert on AI search and SEO; member of Marketing AI Industry Council providing guidance on search evolution
Will Reynolds
Expert on AI search and SEO; member of Marketing AI Industry Council providing guidance on search evolution
Quotes
"Like, we have to accelerate growth and innovation as a society, as an economy. Like, this isn't an option. And anyone who tells you this isn't going to lead to workforce reduction, that is not true."
Paul Raetzer (Opening)
"AI is incapable of doing anyone's job today. I don't care what the knowledge work field is... but it is increasingly good at doing tasks within that job."
Paul Raetzer (Question 12)
"If you just take the AI Fundamentals course series, the piloting course series, and then ideally scaling as well... that is going to put you in the top 1% basically in your field at this point."
Paul Raetzer (Question 2)
"I think more and more it's probably going to be just taking existing roles and infusing AI requirements into them versus changing titles to have AI in them."
Paul Raetzer (Question 1)
"We're going to solve diseases, we're going to go to other planets. We're going to discover things we would have just never discovered scientifically in the next five to 10 years. You know, it's just going to be an amazing time of innovation."
Paul Raetzer (Closing)
Full Transcript
2 Speakers
Speaker A

Like, we have to accelerate growth and innovation as a society, as an economy. Like, this isn't an option. And anyone who tells you this isn't going to lead to workforce reduction, that is not true. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Raetzer, founder and CEO of SmartRx and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 185 of the Artificial Intelligence Show. I am your host, Paul Raetzer, along with my co-host Kathy McPhillips, chief marketing officer at SmartRx. This is our 10th episode in our AI Answers series, presented by Google Cloud. This is our series based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events. So if you're not familiar with those, Kathy and I do a class together each month: a free Intro to AI class, and then a Scaling AI class. The intro one has been going on since fall of 2021, so we actually just did the 53rd one. So the questions we're going to go through today are extracted from our audience from that 53rd edition of Intro to AI, and then Scaling AI. Kathy, we are on 13 now.

0:00

Speaker B

We're doing number 13 on Friday.

1:51

Speaker A

13 on Friday. So this is dropping on December 11th. If you're listening to it on December 11th, join us on Friday, December 12th. I'm getting my dates right here, for Scaling AI 13. So again, Intro to AI is like the fundamentals everyone needs to know, every knowledge worker, every leader. Scaling AI is sort of the five steps that leaders should be taking to scale AI within their organizations. So we do this AI Answers series, as well as those intro classes and the Scaling AI classes, as part of our partnership with Google Cloud. They are the presenting sponsor for those as part of our overall AI literacy project; we're trying to accelerate AI literacy. We've had this incredible partnership with the Google Cloud marketing team for more than a year now. We've been teaming up on the AI Answers podcast series, as well as Intro and Scaling, and a series of AI Blueprints that we're going to be launching here in the next month or so. And then we have this incredible Marketing AI Industry Council that you're going to be hearing a lot more about. We have our first report coming out of that council, also probably in the next month or so. So you can learn more about Google Cloud at cloud.google.com, and also check out Google Workspace. We talked about Workspace Studio on episode 184 this week, actually. So if you listen to the weekly episode, we talked about Google Workspace, where you can go in and build your own agents within Workspace. It's cool. I said on the podcast it wasn't working, but it's working for me now. I'm really excited about that as well. Okay, with that, I'll let Kathy kind of go over how this is going to work and where these questions come from. But again, Tuesdays are our weekly episodes. If you're a weekly listener, that is still going on; this is a bonus episode this week tied to our AI Answers series.

1:53

Speaker B

Great. Well, I'll keep this short. We have some really, really good questions, and I want to jump right into them. But essentially, we take your questions from the show that weren't answered, and even some that were answered that are really good, and we turn them into a podcast episode. So this is our 10th time doing this, and we should have one next week after our Scaling class on Friday. So Paul, I'm jumping right in.

3:36

Speaker A

Okay, let's do it.

3:54

Speaker B

All right, number one: what AI positions are in demand for professionals who are not coders? In particular, this person has worked as a business analyst and holds certifications from Google, Anthropic, and others. How can skill sets be presented to hiring managers?

3:55

Speaker A

This is an interesting one. You know, we're definitely starting to see roles emerge. I've mentioned numerous times we're in the process of kind of building out our organizational design at SmartRx and starting to think what those roles are going to be. In some cases, they are the roles we've always had, but with AI capabilities baked into them. So we're still hiring customer success managers, we're hiring account executives in sales. But they have to be AI forward. Like, there's no question. They have to be able to work with AI assistants. They have to be able to build custom GPTs and Google Gems. Like, we need that from the ground up. And so that's how we use our own AI Academy: to train our own people, and then we basically, you know, enable other people to do the same thing. So I think first and foremost it is a layering of capabilities onto existing roles. Sometimes you will see AI also then dropped into that. The first one, kind of the first domino I saw happening, and this goes back to late last year and certainly into early this year, is more of an AIOps role. So it's someone who understands business cases, understands workflows, can work across departments, oftentimes marketing, sales, customer service, and can help them identify ways to build smarter processes and infuse AI technology. So they have this innate capability of understanding what AI is capable of, and then understanding business cases and workflows and looking for ways to infuse AI to drive efficiency, productivity, performance. So AIOps is definitely one. We're starting to look at things like bringing in people who can work with AI to create outputs, but then provide the human verification and enhancement to it. So, like, research is an area in particular I'm very bullish on, where you can use products like Google Deep Research and you can create five reports a day if you want, and they can be 40 pages long; you can do all this work.
But a human who knows what they're doing, who is a researcher by trade or a content creator by trade, a journalist by trade, can go in and actually figure out: is this any good, are their citations legitimate, things like that. So we're starting to look at layers like that. You know, I think there's going to be people who oversee AI agents. Like, there's going to be roles that are literally just, hey, we've got 10 AI agents working in the sales team, someone needs to orchestrate those. So an AI orchestration manager or something like that. And so again, I see a lot of theories of what these roles are going to be. I don't see a lot of titles yet. And I think part of it is just, like, it's hard to put a job out there that no one really understands, when people are searching for specific things they've done in their career. Like, I'm trying to find roles in sales or marketing or customer success. So I think more and more it's probably going to be just taking existing roles and infusing AI requirements into them, versus changing titles to have AI in them. That's my current best guess at what happens.

4:09

Speaker B

Sure. I listened to that Lenny's Podcast episode on the drive in today, and it's like, oh my gosh, I need to be sitting at my desk taking a million notes.

7:02

Speaker A

Yeah.

7:11

Speaker B

It was so valuable.

7:12

Speaker A

Yeah. What Kathy's referring to is Gene DeWitt, is that the lady's name? So amazing. She's at Vercel; she came from Stripe. And Lenny's Podcast is the name of the podcast, and I think I touched on it on the episode this week. But basically she's a go-to-market expert, and it's just, like, amazing. And she tells this story about how they had a single engineer who, in six weeks, took their SDR team from 10 to 1 by just orchestrating AI agents into the workflows and the motions that are going on within the go-to-market team. And I just think that's what's going to happen. So that's where you've got this, like, AI Ops person, who in that case was an engineer, who then just comes in and is like, I can build stuff and analyze workflows. So I do think AI will be infused into titles. I just, at this point, think more and more it's going to be a requirement of whatever your title is. And so if you're someone who's going out and building these capabilities, getting these certifications, and you're in an organization that doesn't already recognize the value of that, I think you just have to figure out a way to demonstrate that through business cases, through, like, forecasting. Like, hey, based on what I've learned with these certificates, here's how I think we could be making improvements to workflows, efficiencies, productivity, things like that. So you have to be proactive in connecting the dots for your leaders on what the value of those are. And if they don't appreciate it, there's going to be a market for that talent.

7:12

Speaker B

I think one of the things I liked best about that is, like, when she said we went from 10 to 1, I kind of like did that gasp of like, oh, gosh. And then she said, and now they're all doing outbound. I'm like, oh, good.

8:34

Speaker A

You're.

8:43

Speaker B

They're doing other things. They're doing things they want to be doing. More value to the company.

8:44

Speaker A

Yep.

8:47

Speaker B

Okay. Number two: I'm working to educate professional communicators, including marketing and PR, through resources and training. What are the top concepts that organizational communicators need to know?

8:49

Speaker A

So again, I kind of think about this one like I was referring to in the first one, where it's just layering that AI competency and comprehension onto what you already do. So I think if you have just the fundamentals. Like, when I created my AI Foundations collection for our AI Academy, when we relaunched AI Academy this summer and fall, AI Foundations was the one where I was like, if you don't take anything else, take this. We have nine certificate courses in our academy right now, and it keeps going up each month. But the Foundations collection was fundamentals, piloting and scaling. So my feeling was, regardless of your role, regardless of how many years of experience you have, if you just take the AI Fundamentals course series, the Piloting course series, and then ideally Scaling as well, but just those two to start, and then you layer that over what you do, that's enough. Like, that is so far ahead of most professionals at this point. So whether you're in communications, marketing, PR, sales, customer service, management, whatever it is, if you just have a really, really strong foundational base of understanding what AI is, what it's capable of, how to look at business problems differently, how to identify use cases, how to help your coworkers identify use cases, that is going to put you in the top 1%, basically, in your field at this point. That'll change in the one to two years ahead, when everyone else kind of has to figure this stuff out. But if you're there and you're in the communications field, I can promise you, like, a minimum top 5% in your field, but probably top 1% based on what we've seen.

9:01

Speaker B

Yeah, for sure. And I think that's just, you know, a year ago, two years ago, it was that saying: you won't be replaced by AI, you'll be replaced by marketers who know AI. And that's kind of evolved, and things are going to keep evolving. So I always implore people, stay up on your training, keep listening to the podcast. As we know more and learn more and share more, we can kind of, you know, speed up some of these things for you.

10:33

Speaker A

Yep.

10:55

Speaker B

Number three, as someone who isn't in the knowledge work field, I don't have any tasks in my job that I can apply AI to. I'm taking courses and learning as much as I can, but I'm not sure what I should focus on in AI.

10:56

Speaker A

So I'm not sure what this person does. The knowledge work field is quite broad. The way I explain knowledge work is: if you use a computer to do your job, you are in knowledge work. So if you think or create for a living, apply reasoning to problem solving, you are a knowledge worker. The non-knowledge-work audience would be more of the labor crowd. But in the United States, it's roughly 100 million out of 136 million full-time jobs that are considered knowledge work. Now, that being said, even if you are in the trades, for example, you can still be using this in your personal life. You can still be applying it to coaching your kids through school. You can be doing it for things like travel, your own financial planning, things like that. Again, anywhere where you're thinking or creating in your personal life, you can do it. But even in the trades I still see it. I have a family member who runs a laborers union, and so I think about this all the time. Even then, you may have people who are using their hands every day, like, that's what they're doing. Or maybe it's firemen, policemen, things like that. But we all have administrative tasks that we have to do as part of that as well. And so, one, I would say zoom out and say maybe part of what you're doing actually is knowledge work in a way. Or you can find ways to improve the things your organization is doing by raising your hand and saying, hey, I've been learning about this AI stuff. I get that I'm out in the field all day, or I'm on the manufacturing line all day, or whatever it is you're doing. But have we thought about trying these things? And you could go and just experiment, and maybe develop a plan, or come up with a way to do something better on the manufacturing line.

Like, again, I think they're so universally valuable, and if nothing else, try it in your personal life. Get really good at it. I love that this person is taking courses and learning, because, I don't know, maybe at some point there's a career field shift too. Maybe the future is in knowledge work, because you see the potential of these things, and maybe in your industry there's a smarter way to do something, and you've got to step into the knowledge work world to do it. I don't know. It's a really good question, though. Actually, I've never gotten that question before, so I really like that one.

11:08

Speaker B

Yeah. Number four: I'm ready to leave my current job. I've considered building agents, governing agents, and developing GPTs, but given that I don't know much beyond my industry, I'm not sure how I can help. What do you think would be a good area to focus on as someone trying to break into the AI industry?

13:21

Speaker A

So if you're going to leave your current job, I don't know if that means to go start your own thing, you know, start building some agents and GPTs to start helping other people, more in like a consulting role. So, again, I think like anything in life. I'm working with my son and one of his friends right now on their pitch competition. In seventh grade they do a pitch competition, and I built this startup buddy to help. I've been advising this seventh grade class for like five years now, so I've done this before with my daughter and now my son. And in the last week I've been just sitting there talking with him about problem identification. Like, if you're going to build anything, any idea, whether it's a startup business you want to go run or you want to pitch yourself to another company to move into a role, anything where you want to create that demand for something, you have to figure out: okay, is there a market for the thing I'm good at? What is the problem I'm solving? Is the thing I'm bringing to the table better than what's out there? And so I think I would almost look at it like you were making a pitch deck for the knowledge and capabilities you have. And whether that's starting your own business or moving into a company where you're going to bring those capabilities in, think about that. What is the problem I'm solving by having these capabilities? Is there the need for it? What is the market potential of this? Can I increase the value of this company? Can I drive more leads to this company? So I don't know, maybe I'd put myself in that entrepreneurial framework of, what are the elements of a good pitch deck, and think about applying those skills and putting yourself into that mode.

And then again, whether you jump into another organization, another corporate job or nonprofit field, or you're trying to build something like a consulting business, think about problem, solution, market potential, competition, and then how you're going to kind of differentiate and get out there.

13:38

Speaker B

Yeah, I did that when we were looking at rebranding SmartRx, you know, back in the spring, and I had to go through that whole idea. Maybe it was my agency days and doing all of that, but laying it out in that format was super valuable for me to figure out where the holes were and where our competitive advantage was and everything.

15:21

Speaker A

Yep, super helpful. Yeah. And talk to the AI about it. I mean, it's such a good question. The context that's missing for us to answer this one fully is, what is the career path you want? And so if you take this and say, hey, here's the knowledge I have, here are the courses I've gone through, here are the capabilities I have in AI, I'd love to be in the healthcare field, I want to spend my time doing X, Y and Z. Help me think about what that next career path move might be and how I can do it. And just work with, you know, Google Gemini or ChatGPT, and play that out. Have that kind of career advice conversation; build a career advisor GPT, you know, that kind of thing.

15:37

Speaker B

Number five: I lead aftermarket sales in the industrial manufacturing space. We have extensive legacy data, install base and service logs, but it's disorganized. For a company just starting, would you recommend prioritizing generative use cases, such as writing emails and content, or predictive use cases, such as forecasting churn, spare parts needs, et cetera, to achieve the quickest win?

16:15

Speaker A

Yeah, I'm guessing, based on just the format of this question, this person already knows the answer to this, which is that generative AI use cases are the most obvious thing. So if you're not doing that yet, you can be starting this afternoon. And if you have this knowledge, you can be, you know, getting permission to go meet with the marketing team, the sales team; maybe you're running like a half-day workshop and you're helping people just find ways to use Google Gemini for 20 bucks a month, to infuse it into what they're doing, and you get those immediate efficiency and productivity gains. If your data is messy, unless you're a data analyst or, like, you know, you work in IT or computer science or something, you're likely not solving that without other expertise and a lot of pain. I mean, we obviously are a more advanced organization than most, and yet we still battle with data ourselves. It's just never as clean as you want it to be, never as organized, it's never in the right places. And to get into more of the predictive side, that maybe requires more of that advanced data analysis and data cleansing and things like that, it's just going to take longer and it's not going to be a straight line. So I would recommend you start moving there. You start figuring out who are the stakeholders that need to be involved, what are all the data sources, who's going to own this project. And you create this long-term vision, which could be like a one-to-three-year plan. That's what I'm saying by long term: hey, we'd love to be in this place where our data is like this. This is what we did at SmartRx. This is our ideal user story for the marketing team: we want them to have this data so they can make these decisions, so they can move faster. Same in sales, we want them to have this. And now, what does it take to get there?

But the generative AI phase, it's literally just: go get Google Gemini and turn it on, and then teach people, like, one, two, three use cases that address a good percentage of what they do every month. And you get these immediate gains with very little training; it's just teaching them how to talk to the AI assistant. So pursue both paths. But definitely, generative AI use cases are the much faster path to immediate value.

16:36

Speaker B

Yep, and that kind of segues into number six: we're sitting on decades of historical service data that isn't perfectly clean, but we want to use AI to unlock value from it. What is the most innovative way to get started? Do we need a certain level of data hygiene first, or can AI help clean and organize this data as we go? And I'm thinking about, you know, our knowledge base, our chatbot, things that we're setting up right now with some of that historical data.

18:45

Speaker A

Again, it can definitely help. But as with any advanced use case of AI, you have to have expertise in the domain you're working in. So if you are not a data scientist and you don't know what good data looks like, you can talk to the AI all day and it may deliver a perfect outcome or output, but you aren't the right person to judge that. Because in data, especially if you're going to be using it for important business decisions or motions within the organization, in marketing and sales and service and operations and product, a 5% margin of error is a really big margin of error. And if you don't know how to identify that margin of error, if you wouldn't know it if you saw it, then you're not the right person to lead this. So you've identified a major issue, which is: we have this historical data, it could be really valuable to us. You could give that data to Google Gemini and say, I don't know much about this, I'm not sure how to organize this and clean it up, could you help me? And Gemini will probably say, absolutely, let's go, give me access to the Google Sheet or upload the CSV file. And it may do things that look super impressive. And you may think, I did this in two hours, I was able to do the work of a data scientist, it would have cost $200,000. But you may make a mistake somewhere in there. Or Gemini could make a mistake or have a hallucination that could make the entire thing worthless. And so this is one of those where I always advise, even though AI is capable of these things, bring in the experts. So another example, sort of a parallel path: I do this with legal stuff now, and HR stuff, and finance. I would not consider myself an expert in any of those things. I've been an entrepreneur for 19 years, 20 years, whatever. I've been running companies, I've done all of it. But I'm not an expert in that stuff. I pay senior advisors to be the experts in those things. Yet I will do a lot of the legwork now.

If I'm doing a legal discussion, if I'm talking to my attorney, I will go in and have that conversation first with Google Gemini. I will arrive at what the brief looks like, or what the letter I need to send looks like, or what the application form needs to look like. And then I will bring in my attorney and say, okay, here's what I've created so far, but you're the expert. So I think that's what we have to do. We do now have these capabilities to do things we couldn't have done before, that we aren't experts in. But we also have to accept the fact that we're not going to know if the output isn't right. And so that's, you know, again, kind of a parallel path to think about.

19:09

Speaker B

So speaking of that, I was just getting ready for some 2026 speaking engagements for you, and I had your contract. So I took the 2025 ones, put them in the 2026 folder for the templates, and I was like, I wonder if these need an update. So me and AI went through your speaking contracts, and it gave me all these recommendations, you know, based on the industry, based on AI today, all of these things: what are some recommendations that I can take to the legal team? Do these make sense? From the output, I looked at them and said, well, this doesn't make sense for us, this doesn't make sense for us, this one. And I sent them to Tracy and to Ashley and I said, what do you think? They added a few things, sent it off to the legal team. So at the same time, what things did the legal team not think about? Because they're not in that speaking world, necessarily. So it was a whole process, and it was super helpful. So we'll see what they come back with.

21:39

Speaker A

Yeah. And here's another real-life example. So literally this morning I dropped my kids off at school. I've got about a 10-minute drive back, and I have a Tesla that has Grok baked into it. So you literally just hit the button on the wheel and you can talk to Grok. And every once in a while I'll experiment with Grok just to see how it's doing. And I was thinking about gross margins. I was actually listening to a podcast, an amazing podcast with Gavin Baker, and they were talking about gross margins of different businesses. And I was like, oh, you know what? We're largely an e-learning business moving forward; events and e-learning are kind of the two main things we do. And I was like, I wonder if our accounting is actually set up properly from a gross margin perspective to factor in the cost of things like course production and the learning management system. Like, I wonder if it's being categorized correctly in our accounting. And I happen to be having lunch with my accountant tomorrow. So I was like, oh, let me have this conversation real quick with Grok, and then I can say, hey, I think we might not be properly categorizing things to know our true gross margin. But I don't know the actual answer. I would not go into QuickBooks and make the change myself. But now I've had this five-minute conversation where I have information I can take to the expert and say, what do you think? Should we be recategorizing things? And so this is what becomes possible when you understand what AI is capable of, and you also understand it's not the end-all. Like, you still need the expertise.

22:27

Speaker B

Absolutely. Ten minutes. Just turn on some songs, Paul.

23:41

Speaker A

Every once in a while I give myself the luxury of listening to music, but most of the time I cram in as much knowledge gain as I can.

23:45

Speaker B

I know. Okay, number seven. I love this question. We adopted a problem-based model for AI to build our AI roadmap and are looking to implement a use case approach with our team. We are currently onboarding an enterprise AI tool for the marketing department. Can you talk about what to be aware of and best practices for sourcing use cases?

23:53

Speaker A

Okay, so problem-based and use case are two frameworks we teach through SmarterX. They're in our 2022 book on marketing artificial intelligence, so you can go read the book. If you're an AI Academy member, you can take courses on these; they're actually in the Piloting AI course series, so we can include some links to those. I also teach these in the Intro to AI class, the free Intro to AI class we mentioned at the beginning. So I go through these basic frameworks. But in essence, in the problem-based model, you're looking at existing challenges, goals you're falling short of, known pain points in your organization, and saying, is there a smarter way to solve these? And you can do this with anything. And we'll put a link in; we have Problems GPT, which will actually help you brainstorm these things and then develop briefs. It's a free custom GPT I built that's just available in ChatGPT. And then the use case one. What we often do is take more of a role-based approach, so you think about jobs. So JobsGPT is another custom GPT you can use; we'll put the link in the show notes. And what we'll do is say, okay, you've got the enterprise AI tool for your marketing team. Get them together in a workshop format. You explain how to analyze their job: look at the workflows they perform each day, think about the tasks that go into those jobs, and then you try and identify which are the ones that would be most valuable to apply AI to, which ones can help us do this more. And so that's kind of the approach we take: we basically break jobs down into tasks and then we identify the tasks that are most valuable. So like the example Kathy just gave about doing analysis of contracts. No one on our team finds great joy and fulfillment in analyzing legal contracts for speaking engagements. That is a really good use. Now, that is not something Kathy does every day.
Like she said, it's like once a year we'll take a fresh look at these contracts. So she looks at it like, I'm just going to keep putting that off because I really don't want to do that. Oh, wait a second, I could probably save myself a few hours and just have Gemini analyze this for me. So that's like a one-off thing. But you also may look at it and say, I'd love to be delivering an analytics report every Monday morning that looks at lead flow and conversions and projects customer lifetime value and looks at churn. Like, I'd love to have this data every week. Could I find a way to have AI write that report for me? And so you sit down and go through more of a workshop model where you take the time, the 30 minutes, yourself, whatever, to think through and brainstorm with JobsGPT or however you want to do it. And again, in our course we actually offer a download that gives you a template to do these things. But then share the ideas with everybody else. That's why the workshop model is so good: solo think, then you do a team or a table think, where you're kind of bouncing ideas around, and then you share. Hey, here are the three I'm super excited about. I think I could save, you know, 10 hours a week by just doing these three. So that's how I would approach it: do it role-based and then tie it to tasks and workflows.

24:13

Speaker B

I'm not sure if I heard this on our podcast or something else this week, but it was like, there's this fear people have of letting their colleagues know they might be using AI on something. Like, did you talk about that?

27:06

Speaker A

Yeah, yeah, yeah.

27:19

Speaker B

So it's just like, the more we talk about the ways we're using it, the more people will be willing to say, okay, this is okay. I'm still good at my job, I'm still smart, I'm still relevant. But it kind of takes away that fear, you know.

27:20

Speaker A

Yeah, yeah. And the premise there is, within a lot of organizations, again, most people feel like they're completely behind and everyone else has figured this out. That is not the case in many, if not most, organizations. There are a few people, like, say, on the marketing team, the sales team, the leadership team. The same thing happens in schools. I've heard this from professors and teachers who don't want to admit they're using AI. They don't want other people to know. There's like a negative stigma to this. And so by doing it in this embracing workshop model, it's like, hey, we want you all to be using it. We actually need you all to be using it.

27:35

Speaker B

Right?

28:08

Speaker A

Let's learn from each other. Let's develop a center of excellence where we share best practices. But yes, in some organizations, there is definitely still this negative perception of people who are using it. And you have to get through that or your company is going to become obsolete. Like, we don't have that choice anymore.

28:09

Speaker B

Okay, number eight. My team consists of technical experts and field engineers, not marketers. What is the best way to introduce AI tools to a technical industrial workforce without causing replacement fear? Maybe we can just take even that part: how do we frame it as an augmentation of their technical expertise?

28:25

Speaker A

So I have the same answer. You could ask this question like 100 different ways. It's such a good question; that's why I think it does come up in all these different ways. But it's basically asking the same thing: when people fear something, how do you get them to embrace it? And the way I always approach this is, find the thing in their job they hate doing and build a GPT for them, build a Gem for them. Show them a workflow where AI helps them do the thing they don't enjoy. So I go back to, I've shared this story before, about when my daughter was 10 and DALL-E came out, the image generation model. She hated it because she's an artist. Her mom's an artist, and she just saw it as this threat and couldn't stand AI. Didn't even like the fact that I was working on AI. And so there was this extended period where I was trying to bring her along and explain, but it can help you in all these other ways. And I eventually found this really great use case for her where the walls came down. It's like, oh, well, if it can help me with that, that'd be really cool. And so we started there. And you just find that entry point. Like, again, I mentioned education earlier. Let's say teachers: developing curriculum, or coming up with in-class exercises that engage kids who maybe generally just sit there and zone out. Talk to it, find ways to do that where you've tried everything. And so I think some of this can just be done in a survey: what part of your job don't you find fulfilling? And you just find the thing and then start there. Just start with one thing. Don't give them a Copilot license or Gemini license or ChatGPT license and say go figure it out. They're not going to do it. They don't want to do it. It's a replacement to them. But if you say, hey, listen, we're giving you Google Gemini.
Here are three use cases we think you'll really enjoy. As a manager who hates doing performance reviews, or writing job descriptions, or whatever it is, legal analysis of contracts you have to go through, find that thing and then stack those. And then eventually it's like, okay, I get how I can really amplify my capabilities with this.

28:43

Speaker B

Yep, this one's a little bit repetitive, but I think we can talk about trust a little bit from an individual standpoint. Part of me gets hung up on the fact that generative AI is just predicting the next token. I want to trust AI as a real thought partner, but I can't quite get past the idea that it's all ones and zeros stitched together from existing data. What would you say to people like me who are trying to move beyond the mechanics and actually trust the technology enough to use it in meaningful ways?

30:47

Speaker A

Yeah, so, just context for people who don't understand the question; it's a very good question. The weird thing about Google Gemini and ChatGPT and Anthropic's Claude and all these models is that, in essence, what they're doing comes from a breakthrough back in 2017 that invented something called the transformer. It came out of the Google Brain team. That transformer architecture became the basis for GPT: generative pre-trained transformer. And the basic premise of that model, and what it enabled with the building of large language models, or LLMs, is that it in essence just predicts the next word. So you feed it all this human data, from your website, Wikipedia, transcripts from YouTube videos, books. It just takes all this information in and it basically learns how humans write. And then when you go in and give it a prompt, it in essence just sort of predicts what the most likely best next word is. That is, at its most fundamental level, how these things work. And that's weird. And the engineers, the AI researchers themselves, while they understand it better than the normal person, the normal business person, they don't truly comprehend why it works. And so, to this listener's question, it's hard to get over that. But it's also like, I don't understand why the speed of light is a thing. Why is that the fundamental law that guides the universe? We can't break that. Or, you know, why does the universe keep expanding? Why does gravity exist? There are just things that we know are true and we can't explain them. And yet I trust gravity every day. And so I think with AI models we kind of have to get to that point where it's okay to have skepticism, that they make mistakes. That's actually probably a really good thing. Yeah. That you're in tune to that.
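The next-word prediction described here can be sketched as a toy program. This is a hypothetical illustration only; real LLMs predict over tokens using billions of learned neural-network weights, not simple word counts, and the tiny corpus below is made up for the example:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent follower. Real LLMs learn a
# probability distribution over tokens instead of using raw counts.
corpus = "the cat sat on the mat the cat ate the food".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than "mat" or "food"
```

The same loop, run repeatedly on its own output, generates text one word at a time, which is the basic motion behind "it just predicts the next word."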

31:12

Speaker B

Yeah.

33:06

Speaker A

But we also have to accept the fact that, for some reason, the laws of physics allow these things that are basically made from grains of sand that form chips; the chips then get put in data centers, and those data centers are given a bunch of data, and out come these things that can make all these crazy predictions. Like, it's the weirdest thing when you actually step back and think about how this all works. I would just sort of get to the point where you accept that, like a law of physics, a law of nature allows this to happen. We don't really know why. And they're not perfect. But if you just use them in that way, as an ability to augment yourself, and continue to have that skepticism and that little bit of doubt, like, I gotta stay in the loop, you're just gonna be so far ahead of everybody else. And as of right now, all the scaling laws tell us that they're just gonna keep getting smarter, the hallucinations will keep going down, and we will have this alien technology everywhere.

33:07

Speaker B

Yeah, but I mean, what we've always said, you know, look at the output, question the output, ask more questions. All of those things will help you get to that point.

34:05

Speaker A

Yeah. I love that question though, by the way.

34:14

Speaker B

I do too.

34:16

Speaker A

That's one of my. One of the better questions I've seen. It's really good.

34:16

Speaker B

Number 10. There's been considerable debate about whether AI driven search tools pose a real threat to traditional search engines. How do you see this playing out? Are Google and the major platforms actually at risk, or are they already adapting in ways that will keep them central to how we find information?

34:20

Speaker A

So six to 12 months ago, there was lots of doubt here. And I don't know that we actually have a ton of answers yet as to where this goes. It is one of those big open questions. But in recent months, the data is showing that Google is not being impacted dramatically by this. Their traditional search model, they obviously continue to infuse with their AI Mode. So right now it's like a secondary window: if you go and do a search, you can click over to AI Mode, or you can activate AI Mode. I think over time, Google Search just becomes AI Mode. They'll probably eventually sunset traditional Google Search in some way. But as of right now, Google's business is humming along. And as I mentioned on episode 184, part of the leverage Google has in this, and the potential to be the one that comes out furthest ahead in the end over OpenAI and others, is they are a cash cow business. Their search and ads business just pumps out money, and they can reinvest that money into building bigger, better models and all this stuff, where OpenAI is completely reliant on funding and eventually an IPO, and they're just burning through tens of billions of dollars of cash where Google isn't. And so I don't know. I think it's going to keep evolving. I think marketers and content creators and brands, we have to stay in tune to this and figure out how it's going to evolve, especially as AI agents become more reliable, and it's actually my AI agent that's coming to your website or coming to your e-commerce site and buying things or gathering information, and not a human. There are just so many unknowns. And I mentioned at the beginning that we have this Marketing AI Industry Council we've formed with Google Cloud, and this is one of the areas we've identified. There are like 15 open questions we basically have as the council, and we focused on AI talent first, like the impact this is going to have on talent.
But these are the kinds of things that we're starting to ask. And we have a couple of people on the council already who are experts in this area, and we lean on them for this kind of guidance. Andy Crestodina and Wil Reynolds are two people that come to mind that we follow when it comes to these kinds of topics.

34:36

Speaker B

Yeah. Okay, number 11. As generative AI matures, what's the next significant shift? Is it AI that can run directly on our devices, doing its thinking locally instead of relying on the cloud, or is there another evolution coming that we should be paying attention to?

36:43

Speaker A

Man, it's so funny. Go back a year ago, go back to last December, and if you listen to the AI Answers podcast, these questions are so much more advanced than...

36:58

Speaker B

...what we would have been getting in an intro class.

37:07

Speaker A

Yeah, that's wild. Super smart questions. Okay, so what comes next? There are two main things I would watch for in 2026. Reasoning continues to get really, really good. And that is the ability for the AI assistant to take time at the point of inference. Inference is what it's called when you and I use Google Gemini or ChatGPT or Anthropic's Claude: when we go in and we ask it a question, we give it a prompt to build an image, to create a video, to analyze data, to build a strategy doc, to write an email. Inference is the time when we ask it, and then how much time it takes to think about what is being asked. And so, up until fall of 2024, we just had answer engines based on information retrieval, or things learned in training. So you ask a question, it instantly responds. It's like one second and it starts going. Now, since late 2024, you'll see it thinking. It'll literally tell you in the AI system, like, thinking, thinking, thinking. And sometimes it'll show you the chain of thought of what it's thinking. But what they found is the more time you give it to think, the better and more reliable the answer becomes. And so the models are getting really good at that. They're tying different tools to that thinking process, so it can go and run a search, it can use a calculator, it can write code behind the scenes, it can extract things from its context window or its memory to personalize the answer. So these reasoning models are just going to get better; we've seen some really strong improvements in them and we expect that to continue. And the other is the autonomy and the reliability of AI agents, not just in AI research, but starting to find its way into the functions of marketing, sales, service, operations, things like that. So reasoning and agents are two things that I expect to be significant. To the question of on-device:
This is actually a really, really important near-term question that's going to have ripple effects throughout the economy, and specifically on Wall Street. I'll just frame this and then we'll move on; I'll probably come back to this, actually, on episode 186 next week. So Apple obviously has dropped the ball on artificial intelligence. I've mentioned many times: Siri is not good. They have fumbled their efforts to catch up many times. I think what Apple's play is, is they're accepting the fact that they are not going to be a frontier lab building the biggest, best models. They are going to be a distribution channel for those models through iPhones and iPads and Macs. And the bet they're making is this: the most advanced models today, like a Gemini 3, require going off to the cloud. You have to go up to the cloud to get access to it because of the compute required. The bet Apple is making, I think, is that a Gemini 3, or even a Gemini 4, they will be able to compress that model and serve it up to you on device, probably within one to two years. And so you will have the previous generation state-of-the-art model on your phone without having to connect to any cloud. So I think that's Apple's bet: these models get somewhat commoditized, and they can serve up the best model to you on the device with complete privacy and low latency. That would change things; that would change the equation of what the value of a proprietary frontier model is. But again, this is an intro-level answer, so I'll probably stop there. Listen to the episode next week, the weekly, and I'll go into a little bit more about this and refer you to a couple of sources where you can go learn more.
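The compression bet rests on simple memory arithmetic: the bytes stored per weight determine whether a model fits on a device at all. A rough sketch, where the 70-billion-parameter figure is purely hypothetical since frontier labs don't publish parameter counts:

```python
# Back-of-envelope memory math for on-device models. Quantization shrinks
# the bits stored per weight, which is what would make serving a large
# model on a phone or laptop plausible. The 70B count is hypothetical.
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * (bits_per_weight / 8)
    return bytes_total / 1e9  # gigabytes of weight storage

for bits in (16, 8, 4):
    print(f"70B params at {bits}-bit: {model_size_gb(70, bits):.0f} GB")
# 16-bit weights need 140 GB (cloud territory); 4-bit cuts that to 35 GB
```

This ignores activation memory and quality loss from quantization, but it shows why compressing a previous-generation model onto a device is a plausible one-to-two-year bet rather than science fiction.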

37:09

Speaker B

Great. Number 12: AI is already being introduced in major conglomerates. Do these companies understand AI well enough before reducing their human workforce?

41:04

Speaker A

So most of the time, no. I think what happened, and I kind of projected this back in 2024, is there would be pressure on private equity-owned companies, VC-funded companies, and public companies to drive efficiency gains from workforce reduction, because there was a belief AI meant that you didn't need as many people. Now, I actually do believe that is true. I think when properly integrated into a business, you don't need as many humans doing the same amount of work, and many companies will choose to reduce the workforce as a result of that. So, just to level set for people who are maybe listening to this kind of thing for one of the first times, here's what I mean by this: AI is incapable of doing anyone's full job today. I don't care what the knowledge work field is, obviously doctors, but you could get into marketers, CEOs, SDRs (we talked about them at the beginning), HR reps. It cannot do anyone's full job, but it is increasingly good at doing tasks within that job. So take, let's say, an associate within a law firm. Maybe they've got a law degree and three to five years of experience. Today, maybe the AI can do 20% of the current work of that person. So if you take the tasks and you lock those in and they remain static, we're not adding any new capabilities to that person or any new tasks; you just take the things they do each week, each month, and maybe it can do 20% of that. Now, if you take 10 of those associates and AI is doing 20% of all of their work, you don't need 10 associates anymore. You need maybe eight, or you need seven. So that's what I mean: it can immediately reduce the workforce if your company isn't growing and if you're not creating new needs for those associates.
So ideally, what you're doing is you're taking a bunch of stuff that isn't happening each month and you're redistributing that 20%, and that associate is now doing these new and exciting things that weren't possible before. Not every company is going to do that. Many companies won't. Many companies will take the easy route, the obvious route: they will reduce the workforce because they themselves don't understand the opportunity to drive growth and innovation by redistributing workforces into these things. And they're going to be under tremendous pressure to just cut costs if they're not growing. And this is why I've said many times, it was the focus of my keynote at MAICON this year and the workshop I ran: our only path forward is innovation and growth and entrepreneurship. If we don't create the need for more work and more jobs, we will see an onslaught of workforce reduction in the next 18 months. Really, millions of jobs will be reduced if we don't accelerate growth.
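The ten-associates arithmetic above works out as a simple calculation. This is illustrative only; the 20% automation share is the episode's hypothetical, not a measured statistic:

```python
import math

# Headcount math from the ten-associates example: if AI absorbs a fixed
# share of each person's tasks and the total volume of work stays static,
# fewer people can cover the remaining work. The 20% share is hypothetical.
def staff_needed(current_staff: int, ai_share_of_tasks: float) -> int:
    remaining_work = current_staff * (1 - ai_share_of_tasks)
    # round before ceil so float noise (e.g. 8.000000000000002) can't bump us up
    return math.ceil(round(remaining_work, 9))

print(staff_needed(10, 0.20))  # 8 associates cover what 10 did before
```

The redistribution alternative discussed above amounts to holding headcount constant and treating that freed 20% of capacity as new work, which is exactly the growth-or-cuts fork in the road.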

41:16

Speaker B

Yeah. So last year at our AI for Agencies Summit, I was interviewing Amanda Todorovich at the Cleveland Clinic, and I asked her, you know, have you gotten to your wish list? Has AI helped you enough that you are now onto the things you've been trying to do for the past 10 years? And she kind of laughed, like, no. I would love to ask her that today. How much have they advanced over the past 12 months that they are starting to get into some of those things? What are people able to do now that they have another year under their belt with their team?

44:17

Speaker A

Yeah. And again, I don't mean to be doomsday about this, but I'm going to tell you point blank: I have met with the people who are being told to have a 10 to 20% reduction ready to go. They are basically sitting on call at any moment to get the go-ahead from the board or the C-suite to cut 20% of their team. And this is not a single conversation; this is many conversations. So I know for a fact, from the people who are in charge of the budgets, that they are being told to be ready to do massive reductions of workforces. I also have knowledge of other reductions that are already in the pipeline that we will learn about in Q1, Q2 of next year. Massive reductions. So this is very, very real. We have to accelerate growth and innovation as a society, as an economy. This isn't an option. And anyone who tells you this isn't going to lead to workforce reduction, including some of the leaders of the current administration, that is not true.

44:43

Speaker B

Okay, number 13, what are the main factors that could slow down the advancements of AI? I think of factors such as government regulation and societal revolt. What could delay the inevitable?

45:51

Speaker A

Yeah, so I actually wrote about this in our Exec AI newsletter this week, so I outlined it there. If you get the Exec AI newsletter, you can go reference that; if you don't get it, go subscribe. But, as we're talking, and again, I don't actually read these questions in advance, I don't know what Kathy's going to ask me, so as I'm answering this, I'm actually going to pull up my newsletter. Here are the ones I identified for what slows AI progress down. A breakdown in the AI compute supply chain: obviously AI is dependent upon Nvidia and TSMC, which makes the chips. Nvidia works with TSMC to create these chips and sell them. If there's a breakdown in that supply chain for any reason. Lack of value created in the enterprises: we've heard murmurs of that. There was that MIT report that everybody loves to cite that was completely not factual. But if the belief is that you can't create value with these things, then people stop buying them and things slow down. IP lawsuits that could make the existing models illegal: I don't think that's going to happen. Restrictive laws and regulations: that one is something we talk about a lot on the podcast, for this reason: there's a lot of effort at the state level to put laws and regulations in place that hinder the acceleration of AI progress. I would also say the scaling laws not working: at some point the reasoning models just stop getting smarter, the post-training stops working. Again, I'm not in the labs, but everybody in those labs is saying that is not happening, and we don't see that happening anytime soon. And then one of the other ones I mentioned is this idea of a voluntary or involuntary halt to model advancements. Actually, if I had to force rank things that could happen,
I think that there is a chance that that one at least becomes part of the conversation: that Anthropic and OpenAI and Google see a major breakthrough on the horizon, they've proved it out in experiments in labs, and they don't know how to control it. So if you listen to the podcast this week, we talk about recursive self-improvement. That's specifically what I'm referring to here: if they find that these things are able to actually improve themselves, and we are running the risk of a fast takeoff that we lose control of. And actually, we'll talk about this on Tuesday: there's a new foundation, formed yesterday, where OpenAI and Anthropic and others are actually collaborating on an agent framework, like standards for agents. That tells me they're talking at high levels about really important topics. And I know that one of the things discussed is, at what point would we actually need to slow down? Now, the reason I don't think that happens is because if the US slows down, China won't; it'll be their chance to catch up. So yeah, those are some of the things that could be it. But again, if you go get the newsletter, it kind of walks you through those. And I also talk about those in the AI Timeline course, in the AI Fundamentals course series that I mentioned.

46:04

Speaker B

Great. Last question. I always try to end on a happy note, but I'm sorry, we're not.

49:02

Speaker A

We'll have to come up with one more after this one.

49:07

Speaker B

Yeah, I'll ask you about your holiday or something. Okay, number 14: as AI systems move toward recursive self-improvement, which you just talked about, what guardrails are needed to ensure they aren't learning from distorted or incomplete views of the world, especially given today's concerns about censorship, rewritten history, and biased information sources?

49:08

Speaker A

I don't have the answer for this one that this person's probably hoping I do. So again, recursive self-improvement is the idea that these things start to learn to improve themselves without a human in the loop. Maybe a human lightly in the loop to start, but eventually it is just constantly, 24/7, 365, improving. So the way to think about this: imagine the model comes out and it's basically like a teenager. We can equate this to the human world. Humans are recursively self-improving: we observe the world, we go to classes, we read things, we watch things, we learn, and we make better decisions. We get guidance from our parents, our teachers. We're improving machines, basically. The AI is very reliant on the AI researchers to do this improvement through post-training. It comes out, it's got these capabilities, it's got this intelligence, but it doesn't keep improving itself until there's a new model run. What they're basically premising is that these things will start to function much more like a human, where it just starts learning from everything. And the problem is they become like PhD level, in theory, in days or weeks, and then they go beyond PhD level, and they go beyond anything where we're even able to understand what they're doing. So that's the premise of recursive self-improvement: you create these models that can improve themselves through real-world understanding, through what they see in the world, once you tie computer vision to this, to the things it's learning from real-time news and things like that, and it starts to learn the way humans do. Guardrails? None that the labs don't put in place themselves. And each lab right now, because there is no regulation around this, makes its own decisions about what's right and what's wrong.
So you're basically in a position where you're saying, I kind of trust Google would maybe have some guardrails in place, but would xAI put the same guardrails in place? Is Elon Musk, who's racing to catch up and get ahead, gonna have the same self-control about slowing down recursive self-improvement as others? I don't know, maybe he would. Is China, if they figure it out, or if there's an espionage thing and they get access to what's going on at one of these labs in the US and they figure out how to go do it, are they going to stop it? Probably not. Recursive self-improvement is maybe the unlock that leads to superintelligence. And that's what everyone's racing towards. It's why they're raising all this money and building all these data centers, and they need to justify that investment. So unfortunately, barring any federal regulation, which I do not see coming in the United States anytime soon, it's on each lab, and then trusting that those labs talk to each other and work together to solve this. Okay, what am I most excited about for next year? That's a good one. So I'll end with my own question of myself. What am I most excited about? A lot of really smart people who want the best for society and for humanity are working on these things and thinking about these things all the time. And I have a sense of optimism that they will figure out the really hard things, and the rest of us are going to go through this golden age of innovation and creativity and entrepreneurship and reimagining careers and businesses. And I think it's really, really good to pay attention to these other things and to talk about them and ask questions about them and have debates about them. And I think we should be doing more of that. But we shouldn't let it replace in our minds, or cause, this overbearing sense of fear and anxiety, because we get to live through maybe the most innovative phase in human history.
We're going to solve diseases. We're going to go to other planets. We're going to make scientific discoveries in the next five to ten years that we would otherwise never have made. It's just going to be an amazing time of innovation. There are going to be hiccups, there are going to be missteps, there are going to be unfortunate events that occur; that's part of human history too. That's always going to be the case when progress is happening. But I generally choose to be optimistic about what's possible. Otherwise I would just curl up in a ball and stop doing what we're doing. So I think that with enough conversation and enough focus on a positive outcome, the net result in the end is going to be a really good thing for society. But we have to be honest with ourselves about the roadblocks and obstacles that are going to happen along the way and the pain points we're going to have to go through. Not talking about them doesn't help at all.

49:26

Speaker B

Right. Okay. So I would just say to everyone listening, there are a couple of things you can do. We've got our Scaling AI class on December 14th, and we have our intro class running January 15th. If you're listening to this and you wanted to hear that whole presentation, Paul and Mike are out speaking all next year; I can help you with that. There's a book, we've got a community, we've got our courses. We have a lot of ways we can help you, big and small. So please stick with us, and we'd love to figure out how we can help you and your business grow next year.

54:20

Speaker A

Yeah. And the Academy. SmarterX AI is where the Academy lives; I referred to it numerous times throughout, so just so people know where that's at, we'll put it in the show notes as well. All right, Kathy, thanks, and awesome questions. Thanks to everybody. How many people registered for that intro?

54:53

Speaker B

That was around 2,000.

55:09

Speaker A

Yeah, we had over 2,000 people registered, so again, these questions come from that audience. We had dozens of questions we didn't get to in that class. We'll do the same thing with the Scaling AI class: whatever we don't get to answer in the timed Q&A there, we'll cover in another special edition episode like this one.

55:10

Speaker B

And thanks as always to Claire for helping us get this all put together.

55:27

Speaker A

Absolutely. All right, thanks, everyone. Have a great week. Thanks for listening to AI Answers. To keep learning, visit SmarterX AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

55:31