The Artificial Intelligence Show

#202: AI Answers - AI for Marketing, Sales & Customer Success, Marketing Agent Swarms, Entry-Level Job Disruption, Environmental Impact and AI Privacy

59 min
Mar 12, 2026
Summary

This AI Answers episode addresses 15 questions from business leaders about implementing AI in marketing, sales, and customer success. Paul Roetzer covers practical topics from getting started with AI adoption to managing privacy concerns and the future disruption of entry-level jobs.

Insights
  • AI agents will enable companies to create marketing team swarms that can replace traditional human teams by the end of 2026
  • Entry-level jobs focused on narrow, repetitive tasks are most vulnerable to AI disruption as managers can now do that work in minutes
  • The key to proving AI value is running internal pilots with specific use cases rather than relying on external studies
  • Leaders must model AI usage and develop high AI literacy to successfully drive organizational adoption
  • Document discipline and organized workflows become critical as teams work across multiple AI platforms simultaneously
Trends
  • Marketing agent swarms replacing traditional marketing teams
  • SaaS companies pivoting to sell AI agent teams instead of software licenses
  • Entry-level job market disruption accelerating across knowledge work
  • Multi-model AI workflows becoming standard practice
  • AI adoption shifting from IT-led to business-led initiatives
  • Compute efficiency improvements driving down per-token costs
  • Companies using AI to reverse-engineer and clone existing software
  • Innovation cycles compressing from quarters to weeks or days
  • Autonomous AI agents operating with minimal human oversight
Companies
Google Cloud
Partnership sponsor for AI for Departments webinar series and blueprints
SmarterX
Paul Roetzer's company offering AI Academy courses and webinars
OpenAI
Creator of ChatGPT and GPT models discussed for AI workflows
Anthropic
Creator of Claude AI models and Claude Code agent platform
Google DeepMind
AI research division led by Demis Hassabis working on energy solutions
Amazon
Example of AI agents going haywire and causing issues at AWS
New York Times
Conducted blind taste test showing AI writing preferred over human writing
xAI
Exploring off-earth data center solutions for energy efficiency
People
Paul Roetzer
Host and founder/CEO of SmarterX and Marketing AI Institute
Mike Kaput
Co-host of the show and SmarterX team member
Emmett Delrose
Google AI transformation manager who emphasized leadership modeling AI usage
Demis Hassabis
Google DeepMind leader working on AI energy efficiency solutions
Jensen Huang
Referenced for calling OpenClaw the most important software ever released
Andrej Karpathy
AI researcher who tweeted about people running autonomous agents overnight
Todd Saunders
Broadloom CEO who exposed unethical AI cloning strategy by VC-backed startup
Quotes
"These things are not reliable and in many cases they are not safe and people are definitely racing ahead and using them regardless. And we all could be collateral damage in that grand experiment."
Paul Roetzer
"I think by the end of this year you will be able to do that. You could do it right now. I mean, Mike, if you and I had a week, we could turn Claude Cowork into that."
Paul Roetzer
"If you're like, but I don't know what that is, just come to the free intro class. Right. Like if you just attend the intro to AI class, you will have the frameworks to go figure this out."
Paul Roetzer
"This isn't a technology thing. This is a complete business transformation that has to be fueled by AI literacy from the leadership level down."
Paul Roetzer
Full Transcript
2 Speakers
Speaker A

These things are not reliable and in many cases they are not safe and people are definitely racing ahead and using them regardless. And we all could be collateral damage in that grand experiment. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 202 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We're recording Wednesday, March 11, 9am Eastern Time, in the middle of a thunderstorm in Cleveland, so hopefully we can do this straight through without any power loss. This is a special edition, so this is not our regular weekly. Don't get confused if you are a regular weekly recap listener on Tuesdays: this is a special AI Answers edition. This is the 15th episode of our AI Answers series, which is presented in partnership with Google Cloud. It's a series we do based on questions we get from our monthly Intro to AI and Scaling AI classes. If you haven't been to those, every month I teach a free Intro to AI class and a free Scaling AI class, and we usually get dozens of questions during those live sessions. So we use these AI Answers episodes to answer the ones we couldn't get to. But this is an even more special version of our special AI Answers series.
This one is actually based on questions we got during our AI for Departments week, which was also presented in partnership with Google Cloud. For AI for Departments, February 24th to the 26th this year, we released three blueprints: AI for Marketing, AI for Sales, and AI for Customer Success. And we did that with a webinar each day. Those webinars had thousands of people registered for them, and the questions we got were amazing; we could not get to all of them during the live sessions. So we decided we'd do a special AI Answers edition that answers questions that came from our audience during those three webinars. Mike has curated this: he's picked some questions from marketing, from sales, from customer success. I have not looked at them. I prefer to do these the same way I do it in the live environment, where I don't see the questions until Mike asks them. You can learn more about both of these, the webinars and the blueprints. Go to smarterx.ai/webinars, where all three of those webinars are available on demand now, and then go to smarterx.ai/blueprints, where you can download each of those blueprints ungated, or whichever one is most relevant to your work. So again, thanks to Google Cloud for their partnership to bring these webinars and blueprints to life. You can learn more about Google Cloud at cloud.google.com. And I think, Mike, that covers everything. Am I missing anything here? No, no, that's it.

0:00

Speaker B

That's some good context into the blueprint webinars. And like you said, we had such a huge audience for those. They were super popular.

3:35

Speaker A

Yeah. All right. And then this episode is also brought to us by AI Academy by SmarterX. AI Academy helps thousands of individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. There are currently 13 professional certificate course series available on demand for Mastery members and individual purchase, with more being added each month. We just released our newest course series, AI for Financial Services and AI for Finance. AI for Financial Services, the industry one, covers real-world applications across banking, insurance, wealth management and more. So you can start applying AI strategically in your organization today. AI for Financial Services is part of our industry series, and AI for Finance is part of our department series. So we have, Mike, let me see if I get this right: we have marketing, sales, customer success, HR and finance.

3:41

Speaker B

At the moment. We are about to do operations, IT and I believe legal to round out kind of the initial functions.

4:36

Speaker A

Yeah. So in the next couple months we will have the vast majority of departments within an organization covered. They're a great starting point for people in your organization who maybe haven't figured all this out. It's a great kind of 101 to 201 level; it really gives practical knowledge, and that's how all of the certificate series are structured across departments and industries. And then there's our Foundations collection, which has Fundamentals, Piloting and Scaling in it. So great stuff there. Go check it all out at academy.smarterx.ai. All right, Mike, I think we've got 15 questions slotted. We'll see if we can get through all these in about an hour.

4:43

Speaker B

All right, Paul, so first up, someone asked: what is the best way to get started with learning all we need to know and implement about AI? Specifically, I believe this person is asking as a CMO. So, leaders, how do I get started wrapping my head around all this?

5:18

Speaker A

All right, so this was not a planned plug, but do we not have an AI for CMOs webinar coming up, Mike?

5:36

Speaker B

We do indeed, yes.

5:41

Speaker A

Is that about to be announced?

5:43

Speaker B

No, that has been announced. Yes. So don't worry, that's no secret.

5:44

Speaker A

Yes, I would say hold tight. Do you remember when that's coming up?

5:48

Speaker B

Yeah, we've got it. It is March 26th at 12pm Eastern. It'll also be made available on demand. So again, it's at smarterx.ai/webinars; you'll see it right there.

5:53

Speaker A

And that's going to come with a blueprint as well, right? Yep. There we go. All right, so the answer to your question is: join us March 26th for a webinar that actually explains all of this, as well as a downloadable blueprint. Now, that being said, the way I've been thinking about AI adoption in organizations lately, whether it's at a team, department or organizational level, is the need for leaders to have a higher degree of AI literacy and competency. What I mean by that is the CMO is going to be the one that's going to have to push the team to figure out how to apply AI for efficiency, productivity, creativity, innovation. They're going to be the person who's going to have to deal with the employees who don't want to learn AI, don't want to do it. The CMO is often overseeing the creative within an organization, and there are a lot of creatives, whether it's writers, designers, video producers, who see AI as a threat to what they do. So for the CMO, I think the starting point is a deep understanding of what AI is currently capable of doing, what these different models are capable of doing, and not just text in, text out, but audio, video, code, and design in terms of image production and video production. Reasoning would be a really important one when it comes to strategy. You have to understand all that. And then you have to be modeling use of these tools for your employees. You don't have to be playing around with all the image generation and video generation tools and things like that, but you need to be using it every day. So I guess for a CMO, the starting point is: understand it deeply. Our Foundations collection on AI Academy is exactly that. Or you could come to our free Intro to AI class. If this is all new to you as a CMO, or you have a friend who's a CMO and it's all new, go to the free Intro to AI class.

But if you're part of our AI Academy, take the Foundations collection. Literally 95% of what you need to know, at a decent confidence level, you will get from those three core series alone within the Foundations collection. Or just come to the free March 26 event, do the AI for CMOs webinar and download the blueprint. So that would be my quick advice. Anything to add there, Mike? Anything you're thinking of?

6:03

Speaker B

No, it's just a really good reinforcement of something that we mentioned on the actual webinar. One of the experts we interviewed at Google, Emmett Delrose, who's a manager of AI transformation there, mentioned that basically the best organizations doing this, the ones she sees, have leaders that model this stuff, that talk about this stuff. And that's one of many things I very much appreciate about your approach: you're always telling us all sorts of stuff about how you're using AI, and it's a massive inspiration and motivation for the rest of us.

8:17

Speaker A

Well, I think, you know, not just me. I mean, you do the same thing with the podcast, for sure. And even internally, there are things you're doing all the time and just sharing with the team. But that is the key. A lot of times, even with the podcast, there are things I'm doing where I'm like, I don't know, this is pretty basic, I don't know that I should share this. And then I'll share something that to me is just second nature at this point, and then I'll go do a speaking gig and someone will come up and say, hey, I heard the podcast, that example you gave was amazing. I actually went and built something, I went and used that, and I taught somebody on my team something. So I think there's a lesson in there for all of us: don't take for granted the knowledge you have, the capabilities you have, and the things you're doing with AI. What you might think is basic might change the way a lot of people think about AI. So definitely think about your own use of it, and then think about sharing and modeling for others how you're doing that, whether it's through LinkedIn posts, internal Slack messages, running lunch and learns, whatever it is. CMOs being leaders in that is a great way to think about it. And obviously we're talking specifically about CMOs because that's the question, but this goes for the leaders of any department.

8:50

Speaker B

All right, question number two. When we're using this term agent, you know, referring to AI agents, what exactly is that? Can you help explain the difference between an agent and just a regular task or prompt that you're repeating with AI?

9:56

Speaker A

So phase one of generative AI was text in, text out. That was ChatGPT when we first started using it in 2022: you would put a text prompt in and you'd get some sort of text output. The same then applied to images and video; you were just putting text in and getting an output from the machine. Agents are AI systems that can take actions to achieve a goal. They can, in some cases, develop a plan of what they're going to do. They are sometimes given access to different tools, like the Internet, to conduct searches. So you can say, okay, I want to produce a research report, and rather than just producing a report from its knowledge base, it goes and searches the web, curates information from the web, synthesizes that, writes an output, and then verifies its sources. It's actually going through and doing a sequence of actions. So imagine, let's say, you're going to run a marketing campaign, because it looks like this question came from the marketing webinar. If you go into Claude and say, okay, help me write a landing page, and it writes the landing page, that's just a simple chatbot creating an output. Then you say, okay, now let's build an entire marketing campaign around this. Now it's going to build a plan of how it's going to do that. It might need to call on different knowledge bases you've given it access to. So it's going to start taking actions, maybe 10, 15, 20, 50 things, and then eventually create all the final outputs for you. It's actually going and doing a bunch of things and taking these actions. And then you may actually have it set up where it's like: go ahead and do the thing, send the emails, do the ad buys. That's agents. Now, the confusion comes in with how autonomous these agents are.

Many of the agents you would be using in your regular workflows at an enterprise are going to be pretty basic in terms of their autonomy. There are still maybe some rules built in, and the human is really heavily in the loop. The stuff you hear about with, like, OpenClaw that we've been talking about, that's much more autonomous, where people are trusting these things with access to a bunch of information to just do stuff, in some cases leaving them running overnight. Andrej Karpathy had a tweet just this week, which we'll talk about on episode 203 next week, about people on the frontier, on the edges, really starting to push the autonomous part of the agentic side. And that's going to create some pretty interesting environments.
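The sequence Paul describes, plan, act with tools, synthesize, verify, is what separates an agent from a one-shot prompt. Here is a minimal sketch in Python; `llm` and `search_web` are hypothetical stand-ins for real model and tool calls, not any vendor's API:

```python
# Minimal agent-loop sketch: plan, act with tools, synthesize, verify.
# `llm` and `search_web` are illustrative stand-ins, not real APIs.

def llm(prompt: str) -> str:
    # Stand-in for a model call; a real agent would call an LLM API here.
    return f"[model output for: {prompt[:40]}...]"

def search_web(query: str) -> list:
    # Stand-in for a search tool the agent has been granted access to.
    return [f"result about {query}"]

def run_agent(goal: str) -> dict:
    plan = ["search", "synthesize", "verify"]                     # 1. develop a plan
    sources = search_web(goal)                                     # 2. act with a tool
    draft = llm(f"Write a report on '{goal}' using: {sources}")    # 3. synthesize
    checked = llm(f"Verify the sources cited in: {draft}")         # 4. verify output
    return {"plan": plan, "sources": sources, "report": draft, "verified": checked}

result = run_agent("AI agents in marketing")
```

A real agent would loop, letting the model choose the next action at each step; this linear version just makes the action sequence visible, in contrast to a single prompt-and-response.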

10:13

Speaker B

So question three is something we hear a lot of variations on, not just in marketing, though that's where this one came from. Do you think AI labs will actually fix the negative impact their technology is having on the environment?

12:47

Speaker A

I think they think they will. I don't know that you and I have any specific inside information about how exactly they'll execute this. But in essence, just for reference if people aren't familiar with this topic and its context: a few years back, a lot of the AI companies set out to be carbon neutral. They wanted to keep their impact on the environment neutral, or actually have a positive impact, actually give back energy. When AI exploded in 2022, that just got thrown out the window. It became: build data centers, consume energy, build as much intelligence as quickly as we can. And so that idea of a neutral impact on the environment got delayed. Now, many of them think that if they build more intelligent systems, these superintelligent systems, those systems will solve for this. Right now, the way they're solving for it is efficiency: every year or so, the cost of compute drops like 10x. They're making more efficient algorithms that use less computing power per token of output, I guess is the way to think about this. So the energy it took to write, let's say, a 10-page research paper a year ago, or a 10-second video from Sora, the amount of energy it would have taken to do either of those things has dropped probably 10x in the last 12 months.

13:04

Speaker B

Yeah.

14:35

Speaker A

So it takes less energy to do the same thing now. But the demand for those outputs is on an exponential, so net, we are requiring way more energy and having a way greater impact on the environment, because demand is rising. They're just satisfying that demand more efficiently. So that is their current path. But they are all looking at solutions in terms of different energy sources and how to get energy more efficiently, including off-earth stuff where the data centers live in satellites, through xAI and others. I am concerned about the environment, like many people. I don't know that there's too much we can all do about it at this point. The couple things I've talked about are: use the more efficient model, and get really good at prompting. Those are two actual things we can all do. The better you are at prompting, the fewer tokens you're going to use to get the output you're looking for. That's honestly probably the most immediate action most people can take. Other than that, I don't know. I believe in Demis Hassabis and Google DeepMind in particular, and I know that they are focused on energy; it's one of the key things they're thinking about. If anyone's going to solve it, I think Demis has a decent chance to do that in the next five to 10 years. So I don't know, I choose to be optimistic. I just don't know exactly how it happens.
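The efficiency-versus-demand point can be checked with back-of-envelope arithmetic. All numbers below are illustrative assumptions, not measured figures:

```python
# Illustrative only: why 10x efficiency gains can still mean rising total energy.
# All figures below are assumed for the sake of the arithmetic.

energy_per_output_last_year = 1.0   # arbitrary energy units per report/video
efficiency_gain = 10                # per-output energy dropped ~10x (assumed)
demand_growth = 50                  # outputs demanded grew 50x (assumed)
baseline_outputs = 1_000            # outputs produced last year (assumed)

energy_per_output_now = energy_per_output_last_year / efficiency_gain
total_energy_last_year = baseline_outputs * energy_per_output_last_year
total_energy_now = baseline_outputs * demand_growth * energy_per_output_now

# Despite 10x efficiency, total energy rises 5x because demand outpaces it.
print(total_energy_now / total_energy_last_year)  # → 5.0
```

The takeaway is just the ratio: whenever demand grows faster than per-output efficiency improves, net energy use still climbs.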

14:35

Speaker B

Yeah, I would just add there, too, related to the previous question: my gosh, you will start to see how many tokens agents use the moment you start running these in Claude Code or something. I'm like, oh my gosh, this is highly compute intensive.

15:59

Speaker A

Yeah, we talked about, I think it was the Jensen Huang quote on Tuesday's episode, about OpenClaw being like the most important software. What was it?

16:16

Speaker B

It was like the most important software that had ever been released, basically.

16:24

Speaker A

And then they open sourced a version of something like that yesterday; I think we'll talk about it on the show next week. But yeah, that's the idea. An agent, something that's taking all these actions, requires way more computing power at inference time, which is when you and I actually use it, than a standard text chat, just like video requires more tokens than text chat or images, things like that. Agents are going to be a massive drain. It's why they're building all these data centers: getting prepared for a world where agents are in everything, in essence.

16:29

Speaker B

Question 4. It seems like there's an assumption that AI will enable us to produce better work, but a lot of us feel like we're expected to take that on faith. What do you say to people who aren't convinced that AI will lead to better performance, or will make their work more meaningful, more valuable, or give it a bigger impact?

17:04

Speaker A

Basically, we talk a lot about personalized use cases here. I would just say, okay, if there's something where you're feeling that way, what exactly is the use case where you're not sure AI is on par with a really qualified human doing that job? And I would develop some evals, which in industry terms just means evaluations or benchmarks. Like, okay, I write this research report every month, or I do this performance report every Sunday night, or I'm in charge of these blog posts or these emails or this proposal or this talent review or this meeting summary, whatever it is. Just take a basic use case. I would make sure you're using the best model available. Oftentimes people who have this concern are using the free versions of these tools, and they haven't used the advanced reasoning models, like a 4.5 thinking model from ChatGPT versus whatever the baseline model is today. So I would say take personalized use cases and then solve for them. Now, I think more and more, and we talk about this move 37 moment, we touched on it again on episode 201, it's becoming very hard to say that a human is just way better at a knowledge task than a machine. We'll talk on episode 203 about how the New York Times just ran an experiment with AI writing. It was a blind taste test, in essence: here's a paragraph from an AI, here's a paragraph from a human writer, which do you prefer? And the AI wins. And it was across, like, 86,000 votes, I think; it was not a small sample size. And writing is just one use case. I think more and more, over the next one to two years, there are just going to be very few tasks left where, if you did a blind taste test, the AI isn't going to be at least on par with a highly qualified expert in that field. It's a very hard reality for all of us to come to, but I feel pretty deep conviction about that one.

17:26

Speaker B

And, you know, I really appreciate this question, so I'm not knocking it at all. But when you say a lot of us feel like we're expected to take that on faith: the solution to that is just go use the tools and kick the tires, right? You don't have to take anything on faith. You can go try it for yourself.

19:38

Speaker A

Yep.

19:53

Speaker B

All right, question five. When using AI as a thought partner, is ChatGPT's, or another model's, tendency to be agreeable an issue here? Like, how can it give you valuable feedback if it's configured to essentially agree with you by default?

19:54

Speaker A

So the term here, which we've touched on on the show, is sycophancy, where it's just like, hey, that's a great question, or that's a great insight, let's build on that, and it never tells you you're an idiot and that's actually a terrible idea. I would say the simplest things are: one, the labs are aware of that issue, and they have adjusted the system prompts behind the models so that they supposedly aren't that way. The other is just say it in your prompt. I would like you to challenge my ideas. I want you to function as a critic. I want you to challenge this as though you're this person. Just tell it that. So even if its system prompt enables it, or its default is to tell you you're brilliant and that every idea you have is great, just tell it to take the opposite position. Steelman the opposing position on this idea, or assess this writing as though you're an editor at the New York Times. Just tell it to be critical, and it will function in that way. That is the fastest solution. Do you have any tips, Mike, that you've used with yours?
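The "just say it in your prompt" fix amounts to putting critic instructions into the system message. A minimal sketch using the common role/content chat-message convention; the wording, the `critic_messages` helper, and its shape are illustrative assumptions, not any specific vendor's API:

```python
# Build a chat payload that instructs the model to act as a critic rather
# than an agreeable assistant. The {"role", "content"} message shape follows
# the chat convention most LLM APIs share; the helper itself is hypothetical.

CRITIC_SYSTEM_PROMPT = (
    "You are a skeptical editor. Challenge my ideas, point out weaknesses, "
    "and steelman the opposing position. Do not open with praise."
)

def critic_messages(user_prompt: str) -> list:
    # Prepend the critic instructions as the system message so they
    # override the model's default agreeable persona for this conversation.
    return [
        {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

payload = critic_messages("Review this landing-page copy: ...")
```

The same text works pasted directly into a chat window or saved as custom instructions; the system-message placement just makes it apply to every turn rather than one.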

20:10

Speaker B

Yeah, no, I think you hit the nail on the head: giving it explicit instructions to be that kind of skeptical critic, to argue with you, at whatever level of this you feel comfortable with. Trust me, you will have the opposite problem if you do this right, where you're like, no, I actually, like,

21:09

Speaker A

want you to go back to telling me I'm brilliant.

21:26

Speaker B

Yeah, you'll come away thinking, like, I'm not good at anything.

21:28

Speaker A

You know, I did this when I was creating courses for AI Academy last summer and fall. I built an AI learning assistant, I've talked about this on the podcast, and in its system instructions it was very specifically told: I want you to challenge everything I give you. When I give you a course that I've created and you're reviewing the deck for me, I want you to challenge my ideas, I want you to ask difficult questions. And it works. Like, at some point you're like, all right, dude, that's enough, back off, I get it. But yeah, you kind of have to turn the temperature down a little bit.

21:31

Speaker B

All right, question six. What are some of the efficiency gains reported from the use of generative AI in the marketing world? This person said: we are jumping into AI use completely, but in the interim, our staff seem to think we have to get this job done the old-fashioned way. So I think they're looking for some proof here about what we're hearing when it comes to efficiency gains in marketing thanks to AI.

22:05

Speaker A

I think this goes back to the question we answered earlier, Mike: just pick your own benchmark. You can go find all these different reports. Mike and I could give you stories of things we're doing in two hours that used to take 20 or 50 or 100, and you can have all those stories you want. Just pick something internally, do a pilot of it and say, okay, traditionally here's something we do every month; it takes us 17 hours on average. We looked at the previous data, or we went through on a task-by-task level and estimated, with the best knowledge we have, how long it would take in a normal environment. And then we're going to do this with AI, and we're going to make sure that people are trained to actually use the AI properly, how they handle prompts, things like that. Then run your own pilot and say, wow, we did it in two hours instead of 17. Create a few of those, and now you've got the business cases, now you've got the proof internally, and there's nothing better than your own proof. You can recite all the reports you want. And to your point earlier, Mike, it's 20 bucks a month. Just spend the 20 bucks a month on the paid version if you don't have it. Get approval if you're in a bigger enterprise and you're trying to prove out the reason for buying, like, a Jasper or Writer or a ChatGPT Enterprise or Google Gemini for the team, whatever it is. Pick a business case that means something internally, show them that, and say we can stack these and get to 10% efficiency gains, 20%. You can get to like 90 or 100 pretty easily, but no one's going to believe that. So start with believable numbers, or just show the reality.

22:32

Speaker B

Yeah. I don't know about you, Paul, but I want to communicate something to our listeners. I literally speak to leaders and audiences as part of my job, and I can tell you there's no study or stat from McKinsey or whoever, helpful information though it is, that's gonna make people wake up more than you saying: hey, this thing we're all familiar with, we used to do it this way, now we do it this way, and look at the difference.

24:14

Speaker A

Well, and to one of the earlier questions: if people don't have a high degree of AI literacy yet, they are inherently going to be like, it can't do what I do. We hear it all the time. And it can be something as basic as writing a newsletter: it can't write the newsletter the way I do. It's like, yeah, sorry, it can. So that's something you have to deal with as well. But I would say again: focus on individual use cases, business cases. Prove that out through your own pilot, your own data. Do a few of them if you need to. That's actually one of the best ways to drive adoption across other departments. If marketing's leading the way and you want to get sales on board, or the success team or the finance team, just show them a business case of something they do. What about your job don't you enjoy? What's the thing you would love for AI to help you with, that you'd rather let go? Don't take the thing that they love and care about, that gives them fulfillment; take the thing that they hate, and then show them how to do that.

24:38

Speaker B

All right, so, Paul, question seven I kind of selected because it's very related to this. You touched on it a bit, and I just want to close the loop here. How are people tracking and counting the time saved by AI? Is it really just that benchmarking you're talking about? Fire up a spreadsheet and basically start writing this stuff down?

25:41

Speaker A

For most organizations, probably. You and I come from the agency world, where you tracked everything anyway. So if you're in a professional services firm, like a law firm or a consulting firm or an agency, you're probably used to it, and you have benchmarks to look at. When I was running the agency, we had 16 years of time data, so we could go back and look at anything and be like, oh yeah, the strategies take on average 44 hours and the blog posts take on average 3.2 hours. If you don't track time, which in most enterprises you aren't, then pick very distinct use cases and develop a benchmark. Or, like I said, go through and break any workflow, any project, into a series of tasks, and then at least make a best-case estimate: all right, if I was tracking my time, this planning part would take me about three hours, and go from there. Now, I don't know if I should get into the Fibonacci sequence here, it's probably overboard, but we used to use Fibonacci. Because what I learned in 16 years of running an agency is that humans suck at estimating how much time it takes to do something. It's always wrong. So we used Fibonacci, where each number is the sum of the two previous: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89. We would say: this is a one-hour, this is a two-hour, this is a three-hour, this is a five-hour, and you just sort of ballpark based on it, because it goes up by roughly the same percentage each time. It removes the human error of estimates like "this takes one and a half hours," fine-grained numbers nobody actually knows. We get distracted all the time, so those estimates are never accurate.
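The Fibonacci estimating Paul describes, familiar from agile story-pointing, can be sketched by snapping each raw hour estimate to the nearest Fibonacci bucket. The bucket values are the standard sequence; the nearest-bucket rounding rule is one common convention, not the only one:

```python
# Snap raw time estimates to Fibonacci buckets (1, 2, 3, 5, 8, 13, 21, ...),
# a common agile convention that avoids false precision in human estimates.

FIB_BUCKETS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

def to_fib_estimate(hours: float) -> int:
    # Pick the bucket closest to the raw estimate; ties resolve to the
    # smaller bucket because min() keeps the first of equal candidates.
    return min(FIB_BUCKETS, key=lambda b: abs(b - hours))

print(to_fib_estimate(44))   # → 34  (e.g. Paul's 44-hour strategy projects)
print(to_fib_estimate(4.5))  # → 5
```

Because adjacent buckets differ by roughly the same ratio, two tasks only get different estimates when the difference between them is big enough to matter.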

26:00

Speaker B

Question number 8. How do you manage the information on each AI platform? Do you keep prompts separately? Each AI platform is getting better and they constantly move positions in terms of who is the leader. So, like, are you porting information between models? Like, how have you managed that?

27:47

Speaker A

I'd be interested to get your take on this one as well, Mike, because this is a daily issue for me; I am now actively working in three models every day. I use Claude all the time, I use ChatGPT all the time, and I use Gemini all the time. Gemini is baked into Google Workspace (we are a Google Cloud, Google Workspace customer), so that's native, right in the productivity tools we use all the time, but I also use the Gemini app separately. ChatGPT came to market first with the best model, so I have three-plus years of history with ChatGPT now, and a bunch of custom GPTs built in there, and a lot of times I gravitate back to it because I have the history and the memory, and I know it's going to do things the right way. If I'm working on something new, especially a high-level cognitive task like a strategy, I will often use all three models, or sometimes multiple models within a single platform. I'll use Claude 4.6 Opus and Claude 4.6 Sonnet and compare the differences. This is very messy for me right now. I don't have an answer. Mike, have you come up with any ways you're handling this differently?

28:05

Speaker B

It's getting better, but it's still super messy. The first thing I started doing over the last couple of years is just documenting all of my workflows. So it's not just about which AI tools; it's like, at this step we go into this GPT to do the thing. Even if worst came to worst and I had to switch, I can just grab the instructions from that GPT and spin up a project or a Gem or whatever. So that's been really helpful. What's really interesting, and I'll see how far this goes, is that as I use Claude Code much more, Claude Code will spin up different skills, these markdown files that tell it how to do something. You can think of a skill essentially as a prompt, like your instructions for a GPT. What's cool is that I have all those skills being created, updated, and logged in a personal shared Google Drive for my personal uses of Claude Code. So even if Claude Code got nationalized by the US government tomorrow or something, I can take those skills and give them to ChatGPT or Gemini. It's my internal knowledge and skill architecture that can port over. It's not going to be exact, but I've tried it before and it works. So that's been nice. But it also takes a lot of work to build.
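The portable-skills idea Mike describes, skills as plain markdown files in a synced folder that can be handed to any model, could be sketched like this. The folder layout, file naming, and function name are assumptions for illustration, not how Claude Code actually stores its skills.

```python
# Rough sketch of a portable skill library: each skill is a markdown file
# in a synced folder, and export_skills() bundles them into one instruction
# block you could paste into ChatGPT, Gemini, or another model.
# The directory layout and headings are made-up conventions.
from pathlib import Path

def export_skills(skills_dir: str) -> str:
    """Concatenate every *.md skill file into one portable instruction doc."""
    parts = []
    for path in sorted(Path(skills_dir).glob("*.md")):
        # Use the filename as a heading so the target model can tell skills apart.
        parts.append(f"## Skill: {path.stem}\n\n{path.read_text().strip()}")
    return "\n\n".join(parts)
```

The point is not the code but the discipline: because the skills live as plain text outside any one vendor's platform, switching models means re-importing a folder rather than rebuilding your workflows.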

29:15

Speaker A

Yeah, I don't remember if we talked about this on the weekly, but Anthropic last week announced an import function where they give you a prompt you can put into one of the other models, and it will basically summarize everything in its memory. Then you give that to Claude and it, in theory, remembers everything. The other thing, and you kind of hit on this: the way I do this internally is, again, I'm the CEO of the company and I dabble in all the departments. Mike can attest to this. Sometimes something will bubble up and become a priority for me that isn't part of my daily job, and I'll have like two days to grind on it while I'm traveling. I've shared a couple of stories like this; our success score for customer success is an example. I'll spend like 48 hours and get it to a certain point, and then it's like, okay, that's it, I'm tapped out, I have to move on to my other CEO priorities and hand it off to the team. Well, what I'll do is exactly what Mike explained. I will create a Google Doc with tabs. I will put in the prompts I used across the different models I've been experimenting with, with a tab for each of those outputs, and the team can actually go in and see the different flows. Then, if I have the time, I will curate all those outputs into a single, CEO-stamp-of-approval summary of what all the different models are saying, which ones I went with, and why. Part of that I do just for clarity, so the team can see the thinking; part of it is to model behavior. It's to say, listen, I don't want you to just give me an output. If I ask you, a coworker of mine, to do something for me, I don't just want to see a copy and paste out of Gemini. I want to see what you thought about and how you prompted it. So, in essence, it becomes an audit trail.
And I've found those to be extremely valuable. I do that now for almost every strategy doc and every innovation I'm working on for the company. I start with a Google Doc and I journal everything, so I don't lose track of which model I did what in, because that happens all the time too. It's like, where did I work on this? Which model was I playing around with the success score in? I'll lose track of things. So I also keep a business journal that's just for me, and I'll note in there, hey, I worked on the success score these two days, here's a link to that doc. Because I'll forget. I had this happen yesterday: I found something and was like, oh shoot, I don't even remember doing this, and it's really good. I guess I started it like three weeks ago and forgot about it, because you can just start all these projects, and the agent stuff is going to make this worse. You start things, and two weeks later you're like, did I do that? I feel like I started that project somewhere.

30:28

Speaker B

You know, it's interesting. I increasingly preach to our teams internally this really unsexy but important skill: document discipline, if you can manage it. Having a system that works for you to keep all this stuff, your notes, your context, your knowledge, your working docs, in a consistently organized place is how you get value out of these tools. And you kind of solve the problem in advance, because as we increasingly have agents, or Cowork, or whatever, just pointing at folders or Google Drive, as long as you have this stuff organized well, you're not going to have to worry about where your prompts are or things like that.

33:13

Speaker A

Yeah. Yeah.

33:56

Speaker B

Okay. Question number nine. How can we balance using AI, and putting a lot of company content into AI, with privacy? How can we leverage AI in industries that require a lot of security? They gave the example of aids.

33:58

Speaker A

Yeah, I don't know a way around this other than working through it with legal. They have to not only be in the loop but probably be in the driver's seat from a governance standpoint when it comes to sensitive information, confidential information, personally identifiable information, whatever it is. What I often encourage people to do is find all the use cases that don't have to touch any of that. Let IT and legal do their thing, let them protect the organization and the users, and put generative AI policies in place that provide the guardrails for responsible use. But don't let that slow you down on all the use cases that don't require that data. In marketing and sales and customer success there are literally thousands of things you can be doing every day, even in a bank or a hospital system or a law firm: uses that don't have to touch any of that data. So it's definitely something you have to think about, and definitely something you have to collaborate with the right people internally on, but you need to own finding all the safe use cases that let you race forward while they're figuring out everything else. I've talked to way too many companies, even in the last couple of months, that are just sitting on the sidelines because IT and legal still have to approve every use case or tool. That is not a sustainable model with the rate of accelerated change we are going through.

34:14

Speaker B

I certainly sympathize with how hard that can be to figure out, but I honestly think some individuals or companies are using this data thing as an excuse not to take action. You can be doing so much with just the knowledge in your head.

35:40

Speaker A

Yeah. And if you hear us saying that and you're like, but I don't know what that is, just come to the free intro class.

35:54

Speaker B

Right.

35:59

Speaker A

Like, if you just attend the Intro to AI class, you will have the frameworks to go figure this out and move the ball forward. Just do not let waiting for IT and legal stop you from making progress.

35:59

Speaker B

Question number 10, are there certain roles or role types within the marketing function that you envision being rapidly undercut or impacted as AI evolves?

36:16

Speaker A

This is something I really struggle with how to answer. Sometimes "all of them" is my answer. If I'm given multiple choice and there's an "all of the above" option, I would choose all of the above.

36:29

Speaker B

Yeah.

36:37

Speaker A

I think the roles where it actually affects job security and job opportunity are entry-level roles built around completing narrow tasks. I'm trying to think how to frame that. If someone was giving you a campaign and you were just executing, if all you do for your job is build landing pages, or write email copy, or write ad copy, if you only do one of those narrow things and you do it all day long, yeah, you're cooked. That is not a job one to two years out. So I think it's anything with a very narrowly defined role, where these are the 10 to 15 tasks, and AI is really good at all of them right now. A lot of times that is the entry level, and that's why we're starting to see early data that entry-level jobs are very difficult right now. Firms like ours, and I've said this on the podcast, I would love to hire a ton of entry-level people. I want to create job opportunities for students straight out of college. I just don't know what those jobs are right now, because when I have an idea to build something, say I want to build a new app or a new success score or whatever, I will just go in and say, okay, great, we finished it, now write the landing page, write me the emails, do all these things, and I'm going to hand that to the marketing team. So I hand the marketing team an almost fully baked campaign that they just need to edit and execute.

36:38

Speaker B

Yeah.

38:14

Speaker A

So all the work that used to get pushed down, as the CEO I now do in like seven minutes, when it would have taken the team seven weeks. Once you have managers, directors, and VPs who realize they can just click a button and do most of the work the entry-level people did, that's going to rapidly disrupt the job market. Copywriter is obviously a role that's been under attack for a while here, and I think that's going to continue. You're just going to need fewer copywriters; you're going to need AI-forward copywriters. And if you have two of those, that's the equivalent of like 20 traditional copywriters, for sure. So I think what's going to happen is you're going to have AI infused into a lot of roles. They will evolve; they may not have AI in the title; they're just going to become AI-forward versions of whatever that role was, and they're going to be 10x. I honestly don't even think 10x is an exaggeration at all, based on what Mike and I are seeing every day in our own company. Yeah.

38:14

Speaker B

And I would just add to that: something increasingly goes through my mind, and this may be uncomfortable to say, but not in a negative way. With the entry-level thing, with the tasks thing, if I'm the one who has to sit here and give you the workflow or the series of steps, it's increasingly irresponsible of me to give that to a human. That's got to be something I should be giving to an agent that can then scale. Not to replace a human, but if that is your job, if you're like, well, I take the steps my manager gives me and go do those, that's a really dangerous place to be in. Right?

39:18

Speaker A

Yeah. And I think we've talked about OpenClaw a few times on the podcast. It's been a little bit more of a technical topic, so we haven't gone super deep on it, but maybe we should connect the dots a little better on an upcoming episode. The way I think about this is: whatever they're doing right now, building these swarms of agents where you can just give them a task, direct them to a knowledge base, and they can go do the thing. Claude Cowork would be an early example of this. It is only a matter of months until these SaaS companies are selling marketing agent swarms. Here's your marketing team out of the box: a media-buying agent, a copywriting agent, all these things that used to live in some of these SaaS companies as templates or GPTs. Basically, imagine you're just paying for a marketing team. Maybe they sell it for 250,000 a year or whatever, but it's everything you need, just plug in. These are agents with harnesses attached that define what tools they get access to and what the system prompts are, and you're literally just buying teams. And I'm honest to God not exaggerating: I think by the end of this year I could absolutely see companies starting to sell their software that way, where they pre-bake agent swarms to do specific things. That is going to be very disruptive, but it is 100% coming. When I think about things I'll say on the podcast, I generally only say things I have a high level of conviction on, and on a 1 to 10 scale I'm at like an 8 or a 9 that by the end of this year you will be able to do that. You could do it right now. I mean, Mike, if you and I had a week, we could turn Claude Cowork into that. This is not hard.

39:55

Speaker B

This is functionally what we are doing, piece by piece, with stuff like Claude Code or Claude Cowork, when you're building these skills to train it to do these things increasingly autonomously. It's just harder to do because it's not integrated right into the systems and the software you pay for every day.

41:41

Speaker A

Coding is the canary in the coal mine. We've said this many times. Everything is happening in coding first, and then it very quickly comes to the rest of knowledge work.

41:56

Speaker B

And I think this could happen even faster. Because half the battle here is these software companies just trying to figure out the pricing of this. It's like once they crack the code on that, we're going to start seeing this happen.

42:04

Speaker A

And then, honestly, how do you position that? Let's say you're a CRM company that sells software, you sell licenses to marketing teams, and you find a way to use your CRM as the source of truth. You realize you can build agents on top of Claude or OpenAI or Gemini or whatever, and you can create agent swarms that function as a marketing team. How do you possibly go to market with that message? Because it is a pure replacement play. Instead of buying 100 seats, we're going to charge you $100,000 a month or whatever that number is, and you're going to have an in-house marketing team. All you really need is people to be the human in the loop and oversee and guide those agents and keep them on track. And that's why I think the legacy software companies could get disrupted real fast: their inability to say what has to be said, which the AI-native companies...

42:15

Speaker B

Yeah.

43:13

Speaker A

Will have no problem saying like a

43:14

Speaker B

Silicon Valley startup could come in unapologetic about it.

43:15

Speaker A

Yeah, they're gonna raise $50 million because they're going to go after, let's say, a $90 billion labor market for marketers. I don't know what the actual labor market for marketers is, but let's make a number up and say it's 90 billion a year in payroll. You just go after it, and if you get 1% of that, that's 900 million. Not bad. That's an easy raise if you're trying to get money from VCs, and trust me, it's already happening. Yep, that's how they're doing it.

43:20

Speaker B

All right, this next question, question 11, is a bit sales-specific, coming from the AI for Sales webinar. This person asked: when AI can start making sales calls, won't people associate that with spam robocalls? I can't see how this would make a cold prospect trust my business, because I wouldn't trust any company that had an AI call me. If they can't be bothered to pick up the phone, I can't be bothered to buy from them. So certainly one perspective. I am curious how you look at this, because we are seeing a lot of companies start to experiment with this type of thing.

43:50

Speaker A

Yeah, I mean, I think cold calling is just a pure numbers game. Most of us hate it, but it does work at some small percentage. I'm not a pure salesperson, but it's that question of how many calls do I make? We know my numbers: if I make 100 calls, I'm going to get three people to actually talk to me, or whatever that number is. Now imagine it's an agent swarm doing this, and it can make 10,000 calls in a day. You're just playing a percentage game. And yes, most people hate it and wouldn't trust that company. But if it's a company that isn't built on trust, and unfortunately there are plenty of those, companies just in it to make money and drive those sales, then they're totally going to use this method, flood the market, and play the numbers game. It costs them way less to take shots on goal with a machine rather than a human. So I'm sure this is already happening. I'm sure you live in this world, but I can almost guarantee it. I don't want to say unethical, that's probably not fair, but there are companies that play this game, the cold-call outbound spam game, and they play it well. And there's no way they're not playing it with AI right now.
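The "pure numbers game" above is easy to make concrete. The human rate comes from the episode's "three people per 100 calls"; the swarm's volume, its (worse) connect rate, and the comparison itself are made-up illustration numbers, not data.

```python
# Back-of-envelope version of the numbers game: a human making 100 calls
# at a ~3% connect rate vs. an agent swarm making 10,000 calls a day at
# an assumed worse rate. Volume dominates even with a worse rate.

def conversations_per_day(calls: int, connect_rate: float) -> float:
    """Expected conversations from a given call volume and connect rate."""
    return calls * connect_rate

human = conversations_per_day(100, 0.03)     # the "3 people per 100 calls" case
swarm = conversations_per_day(10_000, 0.01)  # assumed worse rate, huge volume
print(f"human: {human:.0f}/day, swarm: {swarm:.0f}/day")
```

Which is why, as the discussion notes, a company that only cares about volume economics will run this play regardless of how prospects feel about it.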

44:21

Speaker B

Yeah, for sure. And we've talked about this on past episodes: are we really talking about the issue being AI, or the fact that you don't like being cold called, like you just mentioned? Think about it in a different context with chatbots. People don't care that chatbots exist; they care that they're bad. If these calls are good and relevant and personal, and less annoying than a human who knows nothing about you anyway, who knows?

45:37

Speaker A

Yeah. If they can target them based on needs, if they can get the right data set.

46:04

Speaker B

Yeah.

46:09

Speaker A

And they can predict that you're someone who is a captive buyer of what they offer, and they get you at the right moment, then yeah. It's just like advertising: a lot of times you tune out ads, but if it's something relevant to you, you stop and listen. So if you're in the market at the right time and they can use predictive modeling to figure that out, then who's to say they're not going to be successful doing it?

46:09

Speaker B

Question number 12. If someone suddenly finds themselves with so many extra hours each week because AI is doing things like handling admin and support work, what strategies would you recommend to make sure that time is reinvested into real growth and competitive advantage rather than just more busy work?

46:29

Speaker A

We often guide people to have a sandbox. As you're working through your AI adoption plan, especially when you think about scaling AI within an organization, you need to coach people. One thing is you can give them some time back; maybe they don't have to work as many hours. The other, and you might do this in a workshop model where you're helping people ideate, is to ask: what other value can I be creating? What other projects can I be working on? What are new ideas? Maybe I can take 20% of my time savings and put it into innovation. That would be an amazing thing, if we had a 20% innovation budget for people's time, with innovation workshops every quarter where everybody comes up with ideas, and as they save more time they've got a wish list that management has signed off on and prioritized: yeah, if you get time, these are the innovations we're really excited about, let's do these. That would be a great use of it. And again, I've said this many times, but to me the only way we slow down the job disruption is through innovation and growth, so an innovation and growth mindset is essential. This idea of an innovation sandbox, and I'm saying this a lot now but I've never actually verbalized it quite this way, and it makes a ton of sense as I'm saying it, we need these internally: an innovation and growth sandbox of ideas. So it's like, hey, the thing I thought was going to take me all week, I actually just did, and it's 11am on Monday. What do I do this week? The innovation sandbox. That would be a great use of it.

46:47

Speaker B

Yeah. And there was a recent post from a software entrepreneur who basically said, hey, months are now weeks and days with what AI enables. He was doing his company planning and all this quarterly stuff, and they just did it in an afternoon. So to get the innovation wheels turning that way, you have to ask some kind of insane questions sometimes, like: this is my goal for the year, could it be done in a day? Something like that.

48:19

Speaker A

Yeah, this is a hot topic for us because we're having an annual meeting next week and we're doing an innovation workshop. I'm leading an AI innovation workshop, Mike's doing one on productivity, and we're talking about rocks for the upcoming quarter. Things that seem crazy aren't anymore, whether that's the goals for the company or what can be achieved in a quarter. What used to be, all right, I'm going to have these five rocks and this will get me through the next three months, and if I accomplish this, that'll be great. As a manager, as a leader, I'd be like, yeah, that would be great, but what if you did those in the first three weeks of the quarter instead? Right? Because I'm looking at them thinking those aren't three-month projects anymore. So I do think, again, it's a mindset shift, but it's challenging people to think much bigger about what they can do.

48:46

Speaker B

Question number 13. There's this larger conversation around the SaaS apocalypse, which we've talked about on previous podcast episodes. When do you personally think it makes sense to purchase software versus try to build things yourself with tools like Gemini, et cetera? For example, if I want to analyze calls for client sentiment, that's something AI might be able to do out of the box with the right prompt, or I may need to consider buying software for it. How do you look at that?

49:33

Speaker A

Yeah, I think generally you're still just going to be buying software in a lot of these cases. I'm looking for a tweet right now; I think I saw it this morning. I couldn't sleep last night, so I was up at like 2 in the morning just looking at stuff. Yeah, here it is. I think I put this on the list for us to talk about next week, Mike, but I'll use it now. Okay, so at a high level, you're likely still using the traditional tools; there's a reason they're good. The more likely scenario is that how you use those tools, and your pricing plan, may evolve to the point where you have agents just logging in and extracting information, and you don't need as many seat licenses. So your relationship with that software company may change, but the reality is most companies aren't going to go build a CRM. Now, if there are distinct, narrow use cases where you can vibe-code something, as someone with no coding ability like me and Mike, then maybe you do. I've shared the example of an org chart builder: I just couldn't find one, so I built one myself in Lovable. That's the more practical example, these point solutions, maybe for internal use only, stuff like that. So here's the tweet. This is Todd Saunders on X, CEO of Broadloom, previously at Google. He tweeted: we all knew this was coming, but today I heard about it actually happening. A seed-stage company backed by a well-known VC openly admitted in a board deck that their strategy is to get access to a large incumbent's software through a customer, clone the entire thing using Claude Code, and offer it for 90% less. Not build something better, just copy it and offer it for less.
The VC endorsed this as the go-to-market strategy and even put in writing that it was a good idea. Using a customer's licensed access to reverse-engineer a product and clone it is ethically bankrupt; I don't know how else to put it. It likely violates terms of service, and it may violate trade secret law as well, but I'm certainly not a lawyer. And a reputable VC putting this in writing in a board deck is genuinely insane. But it's going to happen anyway, everywhere, all the time. I don't know where this ends, but we all knew this was coming and now it's here. So, like I said, no one's going to code a CRM from scratch, but this is how the models work. The rumor is that this is how the Chinese labs are currently distilling models from Gemini and Claude and ChatGPT: by prompting hundreds of thousands of times and, in essence, reproducing the weights and how to build the model. So you could certainly do it, and maybe in the US it's going to be illegal, but that doesn't mean other countries aren't going to basically do it. They're just going to get a license. So from an ethical perspective, stick with the first answer. The second piece of context is just to give you a sense of what is actually happening in the world, because it's kind of weird.

50:03

Speaker B

One final point there, which we've talked about a bit in the past: forget building or buying, I also wonder how this will change your expectations of your existing software. I'm already sometimes frustrated by stuff we pay for that's very valuable, because I know Claude or Gemini could give me a much better answer if I just had it layered over this data. It's like, why is the AI in this software we're paying for not as good?

53:20

Speaker A

Yeah, that might be a good, more narrow example to stick on for a second. Imagine you have a CRM system and you want to talk to your data, and for whatever reason they have yet to build the agent that makes that super easy to do. So then you're like, screw it, I won't pay you on a token-by-token basis, or credits, or however you want to charge me, to use your crappy agent that doesn't give me the information I want. I'm just going to connect it to Gemini or Claude, get the data myself, and pay them for the tokens. That's the more realistic thing: these narrow plugins or use cases where you say, I'm just going to extract the data myself, I'm not going to wait for this software company. And if you do that enough times over enough use cases, then maybe you don't need that software anymore. Yeah.

53:47

Speaker B

All right, question number 14. I'm nervous about others using AI agents irresponsibly while it's connected to my personal information. Is there anything we can do to protect ourselves from others who are basically implementing agents without guardrails?

54:34

Speaker A

Join the club on this one; I'm also very worried about it. We'll share a story on Tuesday in the weekly about an all-hands meeting at Amazon where their agents apparently went haywire and wrecked a bunch of code at AWS. These things are not reliable, and in many cases they are not safe, and people are definitely racing ahead and using them regardless. We all could be collateral damage in that grand experiment. I don't know how to protect ourselves other than the traditional ways we monitor our personal information and our credit scores, having those fraud alerts set up personally. It's probably a lot of the traditional stuff; it just becomes more important over time. From a business perspective, you probably want to be talking to your insurance agents about liabilities related to these things and how to protect yourself and your employees, if, say, your employees mistakenly use agents to do things. It's a whole new world, and I think even asking this question is a good starting point.

54:47

Speaker B

All right, our last question here, Paul, question 15. When you're having conversations with leaders, what's your approach to communicating the fact that IT shouldn't ideally be the one driving AI adoption in the organization?

55:56

Speaker A

It's not IT's job. IT's job is not business strategy. It's not reskilling and upskilling people and dealing with change management. This isn't a technology thing; this is a complete business transformation that has to be fueled by AI literacy from the leadership level down. And then you have to be able to personalize use cases, and you have to be able to communicate with people who are afraid. There are so many layers to this that have nothing to do with IT. IT is there to keep people safe, to use the technology responsibly, to reduce risk, to make sure the data stays secure. They play a critical role, but they are not the ones who should be telling marketing how to use Google Gemini. That's not their job. So I would lay out the goals of our use of AI technology, the sample use cases, and how we're going to infuse it into workflows, and none of that is IT's job. Lay out what needs to happen for true AI adoption and transformation, and it's very apparent at that point that IT plays a critical role, but it is not to guide the strategy.

56:11

Speaker B

All right, Paul, that's our 15 questions for this episode of AI Answers. Again, if you need a reminder, go to SmarterX.AI/webinars and you can check out each of these awesome webinars we did. Go to SmarterX.AI/blueprints and you can get a non-gated copy of each of these blueprints for marketing, sales, and customer success, in partnership with Google Cloud. We're so appreciative to them for making all this possible as well. So, Paul, thanks for my first AI Answers.

57:25

Speaker A

But just FYI, Kathy will be back for the next AI Answers. Mike and I did these webinars together; that's why Mike and I ended up doing this AI Answers together. So Mike and I will be back with episode 203 of the podcast on Tuesday. Thanks for joining us for this special edition, and we will talk to you again next week. Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

57:56