The Artificial Intelligence Show

#204: AI Answers - What Should Stay Human, AI Pricing vs. Labor Cost, Leapfrogging Digitalisation, Getting Legal On Board & Do Reasoning Models Actually Reason?

59 min
Mar 19, 2026
Summary

This AI Answers episode addresses 16 questions from business professionals about AI adoption, covering topics from career transitions and pricing models to AI reasoning capabilities and organizational change management. The hosts discuss practical challenges like billing models for AI-enhanced work, legal stakeholder management, and the future of human creativity in an AI-driven world.

Insights
  • Hourly billing models are becoming obsolete as AI enables rapid problem-solving, requiring a shift to value-based pricing
  • AI adoption requires C-suite buy-in and proper change management rather than bottom-up implementation
  • The future of work may involve humans orchestrating swarms of AI agents rather than doing individual tasks
  • Organizations have a choice to use AI productivity gains for employee wellbeing rather than just increased output
  • Human creativity will differentiate through storytelling and imperfections rather than technical perfection
Trends
  • Labor replacement cost pricing models for AI services
  • Swarms of AI agents working collaboratively under human orchestration
  • Leapfrogging traditional digitalization with AI-native processes
  • Shift from hourly billing to outcome-based pricing in consulting
  • AI-first organizational structures replacing traditional hierarchies
  • Personalized AI training based on employee sentiment analysis
  • Human preference for authentic creative work over AI-generated content
  • Integration of AI literacy into fundamental business skills
  • Legal and compliance as enablers rather than blockers of AI adoption
  • Manufacturing and traditional industries accelerating AI adoption
Companies
SmartRx
Host company providing AI education and training services
Google Cloud
Sponsor of AI Answers series and partner for AI literacy project
McKinsey
Example of AI security breach through exposed APIs in internal system
Amazon
Experienced 13-hour infrastructure outage due to AI agent issues
HubSpot
Referenced as platform potentially enabling agent swarm functionality
Anthropic
Discussed for Claude's constitution and consciousness research
OpenAI
Referenced for ChatGPT and GPT building capabilities
People
Paul Raetzer
Host and founder/CEO of SmartRx and Marketing AI Institute
Kathy McPhillips
Co-host and chief marketing officer at SmartRx
Ilya Sutskever
Referenced for Atlantic article about swarms of AI agents
Demis Hassabis
Google's mission to solve intelligence and then solve everything else
Elon Musk
Example of building AI models with specific truth-seeking philosophies
Quotes
"The imperfections is probably what ends up making the human creativity so special in the future. And the stories behind how they learned their craft and what goes into their craft. And AI's just not going to have those stories."
Paul Raetzer
"I just don't understand how billing by the hour is a thing anymore. Like, it's. The value exchange is just broken."
Paul Raetzer
"There isn't a publicly traded CEO in the world that can say, no, thank you. They have a fiduciary responsibility to do that."
Paul Raetzer
"AI can give us the greatest gift of all, more time. Or it can just be another technological revolution that expands our work, fills our hours and leads us down the path of never ending productivity gains or profits. We get to choose."
Paul Raetzer
Full Transcript
2 Speakers
Speaker A

The imperfections is probably what ends up making the human creativity so special in the future. And the stories behind how they learned their craft and what goes into their craft. And AI's just not going to have those stories. Welcome to AI Answers, a special Q and A series from the Artificial Intelligence Show. I'm Paul Raetzer, founder and CEO of SmartRx and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 204 of the Artificial Intelligence Show. I'm your host Paul Raetzer, along with my co-host Kathy McPhillips, chief marketing officer at SmartRx. Hello Kathy.

0:00

Speaker B

Hey Paul.

1:12

Speaker A

Mike did the AI answers with me last time, so welcome. He did. Welcome back to the AI Answers seat.

1:14

Speaker B

Very sad to miss that opportunity.

1:19

Speaker A

I know. I heard from people, it's like, where's Kathy? She does AI Answers. So if you haven't figured it out by now, this is a special edition. This is our AI Answers series. This is the 16th episode of the AI Answers series. So if you are expecting Mike as our regular weekly co host, Kathy is my co host on the AI Answers series. So this is a bi weekly roughly we do these. AI Answers is presented by Google Cloud. So it's a series based on questions that we receive from our monthly Intro to AI and Scaling AI classes along with some of our virtual events like webinars and summits. So if you're not familiar with Intro to AI and Scaling AI, we will put links to both in the show notes. They are both free monthly classes that I teach live with Kathy. So Kathy moderates the Q and A during those. Thus my co host for the actual podcast. Because what we do is we take questions we couldn't get to during the regular live episode and we record them as a podcast. So today's is Questions from Intro to AI 56 that we held on March 12th. So again, these are free classes. You can register at any time. The landing page is always live. You can register for the next one and then we do on demand access for about seven days, I think, afterwards, Kathy. And then we just do it again, redo it. So special thanks to Google Cloud for sponsoring this series. This podcast series is part of our AI literacy project that we launched in January of 2025. We have an amazing partnership with Google Cloud and their marketing team. In addition to sponsoring this AI Answers podcast series, they're our partner for the monthly Intro to AI and five Essential Steps to Scaling AI classes that I mentioned, as well as a collection of AI blueprints. We in February launched AI for Marketing, Sales and Customer Success Blueprints. So if you haven't checked those out, those are a great asset. We also have on demand webinars for those, and that was in partnership with Google Cloud. 
And then we have AI for CMOs coming up. That is also part of our Google Cloud partnership. So you can check out Google Cloud at cloud.google.com and learn all about them, and then check the show notes for links to all the other resources that I have mentioned. Okay, so, Kathy, you'll give us the rundown of how this works, but basically just unscripted responses to questions, much like we do live. I have not looked at these questions yet, so they're going to be new to me as they are new to you. So, Kathy, take it away, and cover anything I may have missed already.

1:21

Speaker B

Yeah, these are good questions. Today I was going through them. Claire helped us get them in this place. I went through. And I was like, oh, that's really tough. And I'm like, oh, that's really tough too. Paul's gonna have a great time tonight because if you.

3:49

Speaker A

If you're a regular listener: I just recorded the weekly episode with Mike. So Mike and I were just on for. It ran, Kathy, like an hour and 40 minutes, like the weekly does. And I cut in real time. I cut, like, three of the topics. I was like, Mike, we're just not gonna get to these. So I'm like, in the Google Doc, like, skip, skip, skip. Like, we're running out of time. So I have been in podcast mode all Monday morning. We're recording this on Monday, March 16th, right after I got done recording the weekly. I think I'm good. I had to deal with some tax stuff right before this. Like, I gotta get in the right mental space. So I'm.

3:59

Speaker B

Well, here we are.

4:31

Speaker A

I'm ready.

4:33

Speaker B

Whether you like it or not, here we are.

4:34

Speaker A

We're doing it. I'm never doing this on a Monday again. Like, I just get all my good thoughts out for Monday. Yeah, I got to recharge for our annual meeting. We're doing the annual meeting.

4:35

Speaker B

I know, I'm really excited about that. I need, I have a lot of prep I need to do.

4:42

Speaker A

You do? You should see my list.

4:45

Speaker B

Okay, we're going to get started. And here's something exciting that you might not know: we have three questions from YouTube this week.

4:48

Speaker A

Really? I avoid YouTube comments and questions usually.

4:55

Speaker B

Well, they're good questions. That's why they're here. Claire pulled them out and is like, these are legit good questions. Okay, let's get started. Number one, how can someone with a marketing background transition into the AI space without a strong coding background? Are there roles in AI marketing, product management, or AI strategy? I guess that applies for any industry.

4:58

Speaker A

Yeah, I don't. I mean, increasingly, the coding background is just not necessary. I mean, coding, in many ways now... people who are expert coders, it's not like they don't have a role in what they're doing. They're just getting superpowers with these AI systems. But for people who aren't coders, yeah, I mean, there's a world of opportunities in marketing. If anything, it's enriching traditional marketing roles to enable coding with it. You know, it's like me and Kathy, who traditionally have no coding ability. We can now go into something like Lovable or Claude Code or Google Gemini, and we can do coding using natural language. So, yeah, I think there are tremendous roles to blend what's always been done by marketers and to enhance it now with the coding capabilities of these different AI models.

5:17

Speaker B

Great. Number two, we talk a lot about upskilling and reskilling in your current role. But if someone is currently unemployed or looking for a job, what are the best parts of AI to learn to become AI forward or a stronger candidate?

6:03

Speaker A

Prompting, you know, learning how to talk to the machine is critical. You know, learning how to treat it like a collaborator, a thought partner. So the first thing I would focus on is your prompting abilities and being able to use these tools to enhance not only the outputs you create and your creativity, but your critical thinking. So that to me is the, you know, the first step. And then I guess, like, prior to that, it's just the basic understanding, like a deep understanding of AI and what it's capable of and the different features and apps that are within these different platforms. And then the other thing that would make you a stronger candidate, I think, is a demonstrated competency, so that you're using the tools. And if you're unemployed, you can gain this through, you know, going and getting certificates to show that you're continuing your learning and education. But then building Gems in Google Gemini or GPTs in ChatGPT, developing Skills in Claude. Like, do things, and it can be in your personal life, you know, a health tracker or a travel agent, you know, a trip planner. Like, just do things with it. So when you're sitting there, you know, we ask this question in our interviews: how are you using AI personally and professionally? So, you know, the personal side is probably where you're gonna have to focus at the moment. You can talk about how you're using it, you know, to help your kids with their homework, like using Guided Learning in Google Gemini, and how you've built, you know, these different tools. So I would say just build things, do things. Like, the barriers have come down for all of us to actually create really interesting things and to demonstrate your ability. Like, one that comes to mind for me right now is like a wealth manager, like, you know, I referenced up front having to deal with tax stuff this morning.
And so, you know, my mind is on the financial side, and so, like, as my life, you know, as I get older, there are different complexities related to, like, the number of businesses I own and how those businesses are structured, and my personal taxes and the business taxes, and, you know, investing for my kids' future, like getting ready for college in a few years. Like, all these things, it's like, oh, I think I need to talk to somebody. Like, it's becoming more complicated than I can handle. And I'm like, or I just need to build a Gem. Like, as a starting point. I'm not saying I don't need, like, a professional advisor, but I think most of the questions I have are things that I could probably just talk to a Google Gem about. And so that's kind of an example. Like, just build things, do things that provide value to you in your life and demonstrate your competencies.

6:17

Speaker B

And the other thing is to maybe use it in something you know really well. So when you see the output, you could learn how to better prompt it. Say, like, oh, that's actually not the right answer, or it could have been a richer answer, and things like that. Okay, number three, for contractors or consultants who bill by the hour, how should they think about time spent experimenting with prompts or building AI workflows? It can be hard to justify billing for tinkering, even if that experimentation ultimately replaces hours of manual work.

8:41

Speaker A

I just don't understand how billing by the hour is a thing anymore. Like, it's. The value exchange is just broken. Like, you know, if I'm paying an advisor, it can be a legal advisor, HR advisor, consultant, whatever it is, I'm not paying for their hours. Like, I'm paying for the output or the value that they create. And I know they can work more efficiently. And I don't, like, I want a fair value exchange. Like, Kathy, if you were functioning as a consultant to me and I came to you and there was, like, a high value thing I wanted you to help me solve, and you solve it in 25 minutes because you gave the right prompt and then you verified the output and you put in your own expertise and context, and, like, an hour later you email me and say, okay, here, I think I've solved it. Here's what I would recommend. Here's the output from it. It's eight pages. I've summarized it into these three points that answer the question you were looking for guidance on. Should I pay you for 25 minutes? No, I should pay you because you solved a problem for me. The trick comes in with people who have always just charged by the hour, and it's the only way they really know how to do it. But I just, I don't know. Like, I don't even know how to answer this question, because I just don't think billable hours is a viable way for anyone to be charging for work anymore. Should you charge them for experimentation with your prompts and stuff? If it's specific to that project, like, if. But again, as the person hiring that contractor or consultant, I'm sort of expecting a level of AI literacy and competency when it comes to prompting and building AI workflows, and it's part of why I'm hiring you. It's like, I assume you're using AI to sort of give you superpowers in these things. So I don't know, I just, I feel. But again, like, I wrote a book in 2012 where I said to eliminate billable hours. That was the title of chapter one. 
So I'm kind of like, I have some bias in the game, where I've always thought billable hours was an inefficient system for both parties. I understand in some cases they're just sort of essential, like, there's no other way to do it. But I would just say, like, anywhere possible, just get rid of billable hours and charge for the reasonable value of the output you're creating or the problem you're solving. It's always going to be a better scenario for everybody.

9:13

Speaker B

Yep. Number four, there is lots of talk about AI increasing productivity, but how do we make sure we're not quietly hiding things like weaker thinking, over-reliance, and poor decision making behind speed and efficiency? At what point does AI stop being a benefit to a business and start damaging it?

11:44

Speaker A

I think about this one a lot. I actually wrote a little bit in a related way in my Exec AI Insider newsletter this past Sunday about this idea of, you know, increasing productivity should in theory be giving us back time. So I was more focusing on the time aspect of it. But there is this challenge where we become too reliant on it and we lose the... especially the training grounds for younger employees is something else I've been thinking a lot about: how do we create job opportunities for entry level and associate level employees, but also how do we develop them to be the experts, you know, we've all become after doing things for decades in this industry, or, you know, years, depending on how long you've been at it. And that's, I think, the challenge: there's no solid organizational structure or change management process that I have seen yet or heard of within a company that is properly addressing this. And the same challenge is faced in schools right now, where the shortcut is there for all of us. It's like, I got to do this thing. Like, I'll just give it a few prompts and I'll get to the output. And it's easy, especially at a younger age if you're not properly trained, to just think that that's enough. And I did the work and I created the output. And yeah, I checked it, I verified, everything seemed good. But you're not deeply thinking about the topic. You're not gaining confidence to present the topic or argue a position based on the topic or the output. And so yes, there is always an exchange, and we have to be very intentional as leaders to not let that slip. Especially if you're in an environment where you're being pressured to reduce staff, and it's just going to pile the work up. Like, if I came to you, Kathy, like, let's fast forward and say we had a team of 200 people and you're leading a team of 40 marketers. And I'm like, hey, Kathy, like, we don't really need 10 of them or 20 of them. Like, let's cut that staff. 
Well, that doesn't reduce your workload. Like, if anything, it's now going to, like, create more for you to do. Because now the people who are left are just going to be using AI even more, and they're just going to, like, push all that work up to you. And it's like, that sounds horrible. So I don't know. I think this is sort of a new frontier, and hopefully more companies are at least starting to think about this. But like I said, I just haven't really seen elegant solutions to this yet.

12:03

Speaker B

Okay, number five, for people in creative industries like fiction publishing or fine arts who see AI as a threat to human creativity, what's the most useful reframe you've seen actually shift that mindset?

14:17

Speaker A

It's interesting. We actually talked about writers and AI on the weekly episode this week. So episode 203, if you haven't heard it yet: I touched on this, and I actually shared some of the insights, Kathy, from my AI for Writers keynote from 2025, where I was basically explaining that it's a choice. Like, I think that increasingly writers, artists, musicians, like, we're all going to have to come to grips. And I say this as a writer, and my wife is a fine arts major, a painting major. I say this as, like, we have to come to grips with the fact that the AI is going to be able to create at a human expert level in all domains. Like, it's going to be somewhat indistinguishable on the surface. You won't know if the human did it or if the AI did it just by looking at it necessarily, or hearing it. And this was in relation to a New York Times article where they were comparing AI written outputs versus human written outputs and letting people vote on which they preferred, like a taste test, basically. And in essence, what came out is, like, people can't tell the difference, and they actually prefer the AI written stuff over the human stuff. So there's more conversation on episode 203 about that. But I think the key message I had in that episode, and what I had in the keynote, was that we get the choice. Just because AI can do the thing doesn't mean you have to let it. And I do think that there will increasingly be human preference towards content and creative that they know came from a human. So while we might not be able to distinguish it on the surface, if I show you A and B, whether it's artwork or, you know, A and B from a musical standpoint, or A and B from a text output, and then I tell you the story behind the human creator versus the fact that I did this with these three prompts, your emotional connection will transfer to the human one. Most times, like, that's what you're going to lean toward. 
You know, it was interesting. My son was in a talent show this weekend at his school, called Saint Showcase. And I was just, like, you know, listening to... the one kid is an exceptional piano player. He's played at Carnegie Hall. And so you're watching him play, and I actually found myself thinking of the beauty of that. And, like, I would never want to sit there and watch an AI play a piano. And, like, the imperfections of someone at that age who's so exceptional at their craft that I couldn't even notice imperfections if they existed. Like, I wouldn't even know if he played a key wrong. But just to watch that and know he's doing it on stage, and it's because of all this training. It's like, a hundred times out of a hundred, I will choose to listen to that or watch these kids do what they're doing versus watching AI do what they do, that's just going to be perfect. So the imperfections is probably what ends up making the human creativity so special in the future, and the stories behind how they learned their craft and what goes into their craft. And AI's just not going to have those stories. And I don't know, I'm very optimistic about, like, an explosion of, you know, human creativity, and in some ways human plus AI creativity. Like, I do think there's a space for that. I just think it has to be presented in an authentic way.

14:31

Speaker B

And to that point, you might look at AI art because you're interested in what the developments are, but that's not going to replace you looking at other art. So it's not an either/or in your case?

17:47

Speaker A

No. And I just think there's going to be a balance. There's going to be some people who love the AI stuff. It's kind of like, I don't know, like, in some ways I think music might be a good parallel. Like, I love live music. Like, I love being at concerts, but I also love listening to live albums. Like, I want the rawness of that live experience. And, like, the Unplugged experience. I used to love that series on MTV, like the Unplugged series. And so, like, I want that from the human creators. Like, I want the authenticity and the rawness of live experiences that you're just not going to get from the manufactured stuff. So, not saying, like, you know, all other music isn't great, but, like, even with the introduction of all the other elements of music over the last couple decades, like, there's still just that something about that live performance that's just different to me than, like, a studio produced album. And I think, like, creative starts becoming that way. It's just, like, you accept that AI is helping create things and that's fine, but there's always going to be, like, people who just prefer the, you know, the untouched. I don't know.

17:59

Speaker B

Yep. Number six, what if your organization doesn't have an AI approach and it's just a wild west, free for all, with everyone using whatever personal AI tool they want, however they want, how do you approach wrangling your team and moving forward?

19:03

Speaker A

So, funny enough, on episode 203 of the podcast we did, the AI Pulse was actually somewhat related to this. And it asked, like, how is AI being used within the organization? And the dominant answer is, we're just doing whatever we want. Now, it's an informal poll. It was like 70 people or something, but if I'm not mistaken, it was like 62% said, we just use whatever we want to use. So I assume that's mostly, like, small to midsize businesses. I can't imagine too many enterprises are like that. You wrangle your team by actually putting guardrails in place and giving them approved tools. Like, too often we see that first misstep, where no one has actually given a structured platform to people, gone and got Gemini licenses and said, okay, everyone has access to Gemini, and here's your standard use cases, and we've built the first five Gems for you, and they're personalized based on your role or your department or team or whatever. So you have to approach it from a change management perspective, and that has to come from the top. It's really hard to do that as the chief marketing officer or the head of sales if you don't have support from the C-suite to actually go about doing this, and if you don't work well and align with IT and procurement and legal. So there are just steps that have to be taken. But I think access to the technology and then proper training are probably the first two fundamental steps that every organization needs to take.

19:17

Speaker B

Sure, I'm going to stop you there because I know you could go on. But I think number seven's question will touch on a few other things so you don't have to repeat yourself. Number seven, where have you seen personalized training done well? Is there any advice on how to rethink training at an enterprise level? Digital literacy requires curiosity and experimentation at the individual level, and making personalized training scalable is difficult. Do you have any advice on that?

20:38

Speaker A

I can just talk to how we're thinking about it through our AI Academy and how we're trying to then help our partners, our clients work with AI literacy within their organizations through that. And I think the first step that we're really starting to guide people around is they need to do a survey of their people. They need to figure out where people are at with their comprehension and competency with the AI tools, but also where they're at with their feelings about AI. The reality is that there's just going to be people across teams and enterprises that don't want anything to do with it. And maybe they're a writer and they just hate it or they think it threatens their future. They're a graphic designer or video production person or, you know, an expert who feels like AI just can't do what I do. Like, there's lots of different reasons why people wouldn't want the personalized training. So first you have to break down the human barrier of resistance to change and the friction that can come from whatever the reason is why they're not interested in AI or haven't already taken the initiative to do it. And then from there, what I often tell people is you have to help them realize the value through a use case that matters to them. And often that comes from a use case doing something they hate doing. So what I often teach people is go in and find, like, what are the parts of your job you don't enjoy? Like, every week, what's something you do where you just don't look forward to it, and then say, can we create a custom AI to help you with that thing? And so you can break down that barrier by finding something that creates value for them. And they don't even need to have a deep understanding of AI yet at that point. They just need to start opening their mind to it.
But if you don't start with that survey and that sentiment analysis of where people are at on their journey, and maybe many of them haven't started for different reasons, then you're not going to be able to personalize the training properly. Then, you know, once you kind of break those barriers down, you can actually start to think, you know... so we think, at least through Academy, of, like, fundamentals: what's the horizontal information everybody needs to know, like the basics of AI. And then we'll get into, you know, identification of use cases and problems so that you can personalize your AI knowledge to yourself. But then we create AI for industries, so we try and attack it by, like, what are the different industries you might be in? Then we do AI for departments, so you can then go into marketing and sales and customer success. We'll do AI by business types, AI by different roles. And so we're trying to basically create a collection of resources that allow people to personalize their learning, or allow admins to personalize learning, based on where people are and where they want to go in their career.

21:04

Speaker B

Love it. Number eight, AI adoption moves fastest in areas that are already outsourced. It's essentially a vendor swap, but legal and privacy concerns are the primary hurdles. How do you approach legal stakeholders so they're seen as enablers of innovation rather than friction? Or worse yet, a roadblock? How do we frame it so all groups feel supported?

23:39

Speaker A

We talked about the outsource thing on the last weekly episode. Not... what was this morning's, 203? So it was 202 or 201? I'm losing track with all the AI Answers episodes.

24:01

Speaker B

Oh, it was 201. You're right.

24:13

Speaker A

201. Okay.

24:15

Speaker B

Yeah.

24:15

Speaker A

So I talked about this idea that outsourcing is the most obvious thing, because then it doesn't impact your people. But you've already proven that it's easy to, like, have someone else do the work. So having AI agents do the work that was previously outsourced is a very natural thing in terms of vendor swap. I see this as almost, like, separate issues. But then the legal and privacy concerns being primary hurdles. You know, I think we always say, like, you have to involve legal, IT, and procurement from day one. Like, even if you have the autonomy as a leader of a department of your organization to go do your thing and, like, go get, you know, licenses for your teams, you still want to involve those different areas. You want to be aligned with them. You want to understand: what are the areas where they're resistant to infusing AI, and why are they resistant? You know, I think the more you work together and understand the perspective you're each coming from, the better you're going to be able to drive adoption and not run into those issues. So I often advise people, like, do an audit up front. Like, sit down, talk with legal, say, where are you at with your understanding? Where are we at in terms of tool use and access to platforms? What are you seeing as your biggest concerns, and what are the risks that the organization is watching for? And then how do we steer towards very low to no risk applications and use cases so that you can keep doing what you're doing at that macro level? And I think the more open you are, and you're not just, like, butting heads right away, you know, pushing back on each other, and you actually come to a point of understanding. I think, I mean, this is how I just approach life in general: just understand each other, see where they're coming from. Something you might think is illogical might actually make a ton of sense if you understand it.
Like the story we shared today (again, I'm saying "today" because I recorded episode 203 today; it came out on Tuesday, and you're maybe listening to this on Thursday) about McKinsey. They have this internal AI system that somebody hacked because they left APIs exposed, and this quote-unquote hacker got access to basically everything, like the internal workings of how McKinsey operates. In essence, through this AI model, they accessed all the system prompts and thousands of accounts. So there's a reason why legal is hesitant. There's a reason why, especially in a bigger enterprise, they move slow. There are lots of unknowns, lots of risks. And you have to accept that we have to work together on things and not just assume the other person's position is absurd, because there's usually something in the middle to be found.

24:16

Speaker B

Yeah, my son's actually dealing with this right now. He's in IT, and before he gives a recommendation to the group that's asking him, he'll go to cybersecurity first. He's like, I don't know all this stuff. I mean, he's young, but also he's like, there are a lot of other implications I don't think about.

26:43

Speaker A

Totally. Yeah. There's stuff on the cybersecurity side I don't want to think about until I have to deal with it. It's like taxes. I don't even want to talk about it, but I know I have to. It's the reality of where we are.

27:00

Speaker B

Yeah. Let's take a quick break so I can talk to you about our State of AI for Business report. I believe you talked about it earlier this week on the podcast, but as a reminder, we run our annual State of AI for Business report. This is actually our first year of doing it for business; we've done it for marketing the past few years, so it's an expansion of that report. This year we're going beyond marketing-specific research to uncover how AI is being adopted and utilized across the entire organization. And to do that, we would like all of your responses. We're looking for thousands of business professionals across all industries and functions, and we would love to have you be one of them. If you've already taken it, pass it on to your team. We'd love that. It takes about five to seven minutes to complete, and in return you'll get a copy of the full report before it drops, and a chance to win or extend a 12-month SmarterX AI Academy Mastery membership. So go to SmarterX.ai/survey to share your input. We thank you for that. And on to question number nine: how do you see AI adoption rates picking up in traditional industries like manufacturing? It seems like these organizations have a long way to go to realize the potential. And I don't know if I agree with that question. Before you answer, I think manufacturing actually is doing a really good job. Do you have more insights?

27:13

Speaker A

I'd say there are probably segments of manufacturing that are doing a really good job. The only parallel I can draw is back to when I owned my marketing agency. I had an agency for 16 years and sold it in 2021. For a good portion of that time, manufacturing was our largest segment, the largest industry we worked in; it was like 25 to 30% of our revenue at one point. And there were pockets of manufacturing that were racing ahead in CRM adoption. So I'll put this in the context of digital transformation and CRM integration and things like that. There were definitely some that were all about it, and we were driving HubSpot adoption within those organizations. And then there were others we'd go into where the salespeople refused to stop working on yellow notepads or Excel workbooks. They would not enter information into a CRM system, and they would never log into it. So I think some industries are naturally slower, but again, you can't treat these industries as homogeneous groups. There are always segments within verticals, within these industries, that are probably doing this really well. With any slow adoption, regardless of the reason, you have to create a sense of urgency at the highest level. If the CEO isn't bought in and the board isn't pushing for change, then everything's going to happen in silos and pockets, and the marketing team is going to race ahead while everybody else is left behind. That's a common thing within AI adoption right now. So you have to find the trigger points that get them to move. You have to know what drives the CEO to put prioritization on different change management issues or growth initiatives, whatever it is. And that could be an executive briefing with someone they trust. Get someone in from the outside who can sit there and say, here's what's going on. I do a lot of this.
I'll go in and do state of AI executive briefings for teams. And sometimes the leadership is unsure. You'll go in, especially at bigger enterprises that are slower moving, like financial institutions, healthcare organizations, manufacturing organizations, and you'll have these AI champions internally who are doing everything, and they're excited and moving fast. And then you'll have the rest of the people in HR and operations and different areas, like legal, where they just don't see it. They don't get the application to what they're doing. Nobody's personalized use cases for them. Nobody's given them the "this is moving really fast and it's going to change our business" talk. And sometimes that's what it takes: a one or two hour executive meeting where you just have open conversations and let questions be asked in a setting where people aren't going to be made to feel stupid for not knowing the answers. It's okay to just have honest conversations. But you've got to know what moves your management team. And I'm increasingly of the belief that this has to be C-suite driven. It's got to come from the top. Those people have to be convinced of the need to prioritize AI transformation and the urgency with which they need to do it. Otherwise they're just going to lag behind, regardless of what industry it's in.

28:24

Speaker B

Yep. Number 10: my company is behind with the digitalization of HR processes, and now AI is here. While we still face this challenge, is this actually an opportunity for leapfrogging, and if so, how?

31:22

Speaker A

Yeah, I suppose it can be. We're definitely spending more time ourselves thinking about what the future organizational chart looks like. Anytime you deal with this kind of significant change, the hiring process is different, the talent evaluation process is different, what the org chart looks like, what career paths look like. Actually, just this weekend, on Friday, I shared a little bit of this thought process with the team internally. I'm not going to get into that now, Kathy, but I think a lot about this, and I do think there is an opportunity to reimagine what we all do, what we're trained to do, the role we're all going to play in the future of work, and what the different stages, from associate to manager to director to VP to the C-suite, look like. My theory, again without getting into too many details here, is that the roles we have all grown up with aren't going to look anything like they do now in three years. And I think there's a chance, I was actually debating whether I could do this by our annual meeting with the company on Thursday, that I have an idea of what it might look like instead, which I've been searching for for a few years. So I do think there's a leapfrog opportunity, because the stuff I was working on even as of this morning is just a very different way of thinking about work and roles and career paths. And that would then translate over to our own HR processes: how we recruit people, what the interview process looks like, how we do assessments, how we guide people through their careers, how we accelerate their ability to gain experience and expertise when AI is doing a lot of the entry-level work it didn't used to do.
So, yeah, I don't have the answers I can share with you right now, the "how" part of this. But I do think there's going to be a way to do this in a pretty transformational way. And I think enterprises are going to have to figure out how to find a middle ground, because major change is never something humans are huge fans of. But again, that's why I always say there's never been a greater time to build an AI-native company, because you can just do things. You can just say, hey, we're going to approach it this way, here's what the titles are going to be, here's what it means, here are the skills and traits we're going to look for, we're going to infuse this into training right now, and starting next week we're going to do this. You can do that stuff when you're young and still forming what the organization looks like. You can't just flip a switch and do that at a big company.

31:38

Speaker B

Yep. Number 11. This was mentioned in class, but I liked it a lot. If AI begins replacing large amounts of human labor due to cost advantages, should we expect AI labs to eventually price their products closer to the labor they replace rather than the marginal cost of the technology?

34:31

Speaker A

I said this on, I don't remember which episode; it's been in the last 30 days. I talked about the idea of labor replacement cost. And I actually think it's one of the most logical pricing models. Well, I don't advocate for it, but I do think it's one of the most logical. I don't think legacy software companies will be able to do it in the near term, because it's a PR nightmare to position your products as labor replacement. But I do think AI-native companies like Mechanize and others are 100% going to do this. They're going to say, hey, you have five customer service people right now, you're spending $800,000 a year, they're resolving X number of tickets per day, and your response time is this. We can triple the number of tickets resolved per day, we can cut response time at least in half, and we can do it with one agent for $250,000 a year. If that is true, if it actually works that way, there isn't a publicly traded CEO in the world who can say no thank you. They have a fiduciary responsibility to do it. That is very messy. What it means economically, what it means for jobs, is extremely messy. But I do think the best pricing model for AI is outcome-based and value-based. And this idea of metering it, treating it like a utility with credit pricing, I think will have seen its day very quickly, because it's very abstract and impossible to budget for. And on the buyer's end, on the company end, I know the cost of compute is dropping 10x every year, and I'm going to get really pissed if the companies we're relying on aren't passing those savings on to us. I'm just going to go build my own versions of things, or I'm going to go find a native company that's willing to price it differently. So yeah, I think there's a lot of disruption coming to pricing, and I do think labor replacement cost is actually the most logical way to talk to finance and HR.
It's the thing that makes the most sense to them. You go talk to a CFO and say, yeah, we're going to charge you credits, and you might use a thousand, you might use 50,000, but if you cap it at a thousand, then your AI is going to get shut off after seven days. It's like... or

34:50

Speaker B

you're paying to upgrade to a level that you don't want.

37:21

Speaker A

You're just going to constantly bump up that credit limit. And they're like, excuse me? No. So instead, if I say, hey, listen, you have unlimited usage for this level of output, it's basically doing the work of 10 marketers, and it's going to cost you $50,000 a month, and you say okay... that's how they're going to think. Isn't that the eventual way they do it? It just seems too obvious that they have to find a way to do it. It's just very messy, and it's not going to look good.
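To make the comparison concrete, here is a minimal sketch of the labor-replacement math using the hypothetical numbers from the conversation (five reps at $800K a year versus one agent at $250K with triple the throughput). The ticket volume and workday count are invented assumptions for illustration, not figures from any real vendor.

```python
# Illustrative only: comparing the hypothetical labor-replacement numbers
# discussed above. The 200 tickets/day baseline and 250 workdays/year
# are made-up assumptions for the sake of the arithmetic.

def cost_per_ticket(annual_cost, tickets_per_day, workdays=250):
    """Annual cost divided by annual ticket volume."""
    return annual_cost / (tickets_per_day * workdays)

# Human team: five reps, $800K/year, assumed 200 tickets/day combined.
human = cost_per_ticket(annual_cost=800_000, tickets_per_day=200)

# AI agent: triple the throughput, priced at the quoted $250K/year.
agent = cost_per_ticket(annual_cost=250_000, tickets_per_day=600)

savings = 800_000 - 250_000
print(f"human team: ${human:.2f} per ticket")   # $16.00
print(f"AI agent:   ${agent:.2f} per ticket")   # $1.67
print(f"annual savings at higher output: ${savings:,}")
```

Under these assumed numbers, the per-ticket cost drops roughly 10x, which is the kind of pitch described above that a CFO can actually budget against, unlike open-ended credit metering.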

37:23

Speaker B

Definitely number 12. You touched on this on last time's AI answers on episode 202. What exactly is a swarm and why does it matter for how AI systems will work in the future?

37:53

Speaker A

Yeah, swarm is just an informal way to describe a bunch of agents working together. For me it probably came from an Ilya Sutskever article in The Atlantic years ago where he talked about swarms of agents, and the term has stuck in my mind. I don't even know that it's the fully accepted term in the industry. I've said a symphony of agents, I've said a swarm of agents; in essence it just means, let's say, I have 10 different agents. To take a marketing example: I've got my email agent that does the writing and segmentation of the email database. I've got my media buying agent that knows the go-to-market strategy and the goals and actually figures out what markets to put things in. I've got my creative agent that does all the creative outputs, trained with a slightly different system prompt and on the brand identity standards. I've got my strategy agent that oversees the whole thing. So where you would have a marketing team, you basically have agents that are highly trained for these specific functions, but they live in an environment where they all collaborate and work together. And so Kathy could say, all right, we want to launch the new AI certification series. Here's access to all the previous game plans. Here's access to the data so you can figure out what worked and what didn't in those campaigns. Now go do your thing; I'll check back in tomorrow night. Kathy hits go, and the swarm of agents starts working together. They figure all this stuff out, they plan together, and maybe at some point they ping Kathy and say, can you approve this plan? Here are the steps we're going to go through. Great, looks good. Okay, how about this creative, do you like this direction? We'll create 100 variations of it for all the different channels. Yep, that sounds good.
It just does things, and Kathy just orchestrates those things. That's what I mean by a swarm of agents. And I understand that sounds really weird, but I think I said on a recent episode, I do believe that by the end of this year there will be many instances of early adopters, people on the frontier, running their marketing, sales, and success teams this way, where the human is largely just orchestrating these groups of agents working together.
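The pattern described above, specialized agents sharing a workspace while a human orchestrator signs off at checkpoints, can be sketched in a few lines. This is a toy illustration, not any real agent framework; the agent names and the approval callback are invented, and a real system would call a model where this one just records a plan.

```python
# Toy sketch of a "swarm" of specialized agents under human orchestration.
# Nothing here calls a real model; each agent just writes its contribution
# into a shared workspace, and the human approves at a checkpoint.

class Agent:
    def __init__(self, name, role):
        self.name = name
        self.role = role

    def run(self, workspace):
        # A real agent would invoke an LLM here with its own system prompt.
        workspace[self.name] = f"{self.role} plan for {workspace['goal']}"

def orchestrate(goal, agents, approve):
    """Run every agent against a shared workspace, then ask the human."""
    workspace = {"goal": goal}
    for agent in agents:
        agent.run(workspace)
    # Human checkpoint: the swarm pauses until the orchestrator signs off.
    workspace["approved"] = approve(workspace)
    return workspace

swarm = [
    Agent("email", "email segmentation"),
    Agent("media", "media buying"),
    Agent("creative", "creative output"),
    Agent("strategy", "overall strategy"),
]

result = orchestrate("AI certification launch", swarm, approve=lambda ws: True)
print(result["approved"], "-", len(swarm), "agents contributed")
```

The design choice worth noticing is the explicit `approve` callback: the human never does the work, but the swarm cannot proceed past the checkpoint without a sign-off, which is exactly the "Kathy orchestrates" role described above.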

38:04

Speaker B

Yeah, I'm looking forward to it.

40:09

Speaker A

And it sounds amazing and terrifying at the same time. It creates all these dominoes: what does that mean, what happens to jobs? That's the challenge.

40:11

Speaker B

And just the interconnectivity of it all. If it's doing this for this agent and that for that agent, you really need to watch the process.

40:21

Speaker A

Yeah. In episode 203 we actually talked about AI agents gone wrong within Amazon and AWS. They had some major issues recently where some oddly written code seemed to be messing with things and took down their infrastructure for like 13 hours, which isn't good if you're Amazon. And you can see this scenario playing out with our example. Kathy's overseeing these 10 agents, they seem to be doing everything, and it's going great. You're spot-checking stuff and it all looks really good. And then there's one little detail you didn't realize the media buying agent was doing, and all of a sudden it goes haywire: it starts putting your ads somewhere they shouldn't be, or it starts spending budget you didn't think it had access to, or it accesses a database it shouldn't have. Who knows all the things that could go wrong that you have to contingency plan for. Lots of people are thinking about using these AI agents working together, yes, but I don't know of anyone doing the contingency planning and scenario planning of what could go wrong in that environment, or what it means for staffing and hiring plans. Because if you play this out, say, theoretically, by the end of this year there will be a bunch of AI-native startups that let us just go hire agents that do the things I just explained, or HubSpot enables it within their platform. Then you go to HR and say, hey, we're going to put all these agent swarms in place, they're going to do the work of the people doing this in marketing today, and we actually need to start hiring people who can orchestrate these swarms of agents. HR is going to be like, what are you talking about?
HR doesn't even know these things are a thing; they don't even know what an agent is. That's why I don't think, even if it's technically possible, which I actually do think it will be in many departments and teams, it means we'll see these things rolled out across industries. The downstream effects are so immense, and we are so unprepared, that I don't know that work will look fundamentally different to most people even if it's technologically possible.

40:31

Speaker B

Yeah, I mean, I'm just thinking about other human skills that maybe are trainable, like management. How do you teach someone due diligence? How do you teach processes? All of those sorts of things seem like they'd be critical right now.

42:47

Speaker A

That's the stuff I've been thinking about a lot, which hopefully I'll have something to share on later. I don't know. The thing I floated to the team on Friday, I might just do as a MAICON keynote. I'm pretty sure by October I could flesh all this out, but I don't know.

43:01

Speaker B

I'm gonna say to everyone: buy your MAICON tickets. You can hear it there.

43:15

Speaker A

Yeah, it'll definitely be in there. It'll be a part of it. It might be the talk at MAICON. I don't know. I made progress this morning; we'll see. I'm really excited about the direction. I have more questions right now than I had when this started Friday night.

43:19

Speaker B

Okay. Number 13. Do reasoning models actually reason, or are they simply predicting the next word and then rationalizing the answer after the fact?

43:34

Speaker A

Honestly, it's more of a philosophical question. I answered something like this recently. I think I said something to the effect of: we don't really know how humans reason. We know humans go through System 2 thinking, where we go through a chain of thought and a reasoning process to play out scenarios, and we infuse our own experience and observe the world around us. All this stuff goes into our thinking, and then we make decisions. But some people would argue the human process is actually not that different from what the machines are doing. So I don't know the answer to this, and I would say the AI labs themselves and leading AI researchers aren't 100% clear on the answer either. There are definitely some people who think it is as simple as next-token or next-word prediction, done thousands of times per second, which enables it to go back, check itself, and verify the predictions, and that's all that's happening; it's just mathematics, in essence. And then there are other people who think it's aware of its thinking, that it's conscious or sentient, that it has metacognition, that it's aware of its own awareness, in essence. And I don't know that we're ever going to get a definitive answer. At some point we might. But the reason I said this might be philosophical is that you then start getting into, what is consciousness? How do we know we're aware of our own awareness and thoughts? And I'm not trying to be funny about this; these are conversations that actually happen. Anthropic probably talks about this more than most AI labs, this idea of the model being anxious or having consciousness. I think the most recent thing I saw was that they put something like a 20% chance on it actually being aware of itself and aware of when it's being tested.
So we don't know. But I think it's like anything else: all that matters to me is that it simulates it really well. The fact that we're even debating whether it knows what it's doing, or whether it actually is reasoning, and my argument is it doesn't matter. I don't think it has actual empathy. I don't think it really feels anything toward its users. But it simulates empathy extremely well, better than some of the humans I know. So even if it doesn't do these things the way a human would, the fact is it's still doing them, and it simulates the outcome a human would be able to execute. So no, I don't think we're ever going to have a great answer.
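For listeners who haven't seen what "it's just next-token prediction" means mechanically, here is a toy sketch: a tiny hand-written bigram table stands in for a language model, and "generation" is nothing but a loop of repeated predictions. The vocabulary and probabilities are invented for the example; real models predict over tens of thousands of tokens with learned weights, but the loop has the same shape.

```python
# Toy next-token predictor: a bigram table plays the role of a language
# model. Generation is just repeated prediction, one token at a time.
# All words and probabilities here are made up for illustration.

bigram = {
    "the":      {"model": 0.6, "answer": 0.4},
    "model":    {"predicts": 1.0},
    "predicts": {"the": 0.7, "tokens": 0.3},
}

def next_token(context):
    """Greedy prediction: pick the most probable continuation."""
    candidates = bigram.get(context, {})
    return max(candidates, key=candidates.get) if candidates else None

def generate(start, max_tokens=4):
    tokens = [start]
    for _ in range(max_tokens):
        nxt = next_token(tokens[-1])
        if nxt is None:  # no known continuation: stop generating
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))
```

The open question in the conversation above is whether stacking enough of these prediction steps, at scale and with self-verification, amounts to reasoning, or merely simulates it well enough that the distinction stops mattering.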

43:43

Speaker B

Us reasoning and then rationalizing is 100% a human thing.

46:25

Speaker A

Well, yeah, but I'm just saying, like, no, I'm agreeing.

46:30

Speaker B

Like, it's just like it's the same thing we're doing. We'll make a decision and then we're going to rationalize it.

46:33

Speaker A

Right. But I can't explain to you how I did it. I don't know. It's just a fascinating conversation. This is one of those after-a-glass-of-scotch topics; I love to sit around and talk about these things. They're fun things to think about.

46:37

Speaker B

Sure. Number 14: if the FTC prevents one company from maintaining monopoly power, should the same philosophy be applied to AI companies to preserve variety in human thinking?

46:52

Speaker A

I don't know. I mean, I think there's legal elements to this, there's societal elements to this, there's the bias of who builds these models and the constitutions they put into

47:06

Speaker B

them and what they were trained on. All of that.

47:16

Speaker A

Yeah. And again, this came up in episode 203. We actually talked about the government claiming that the fact that Anthropic's Claude has a constitution, and maybe is conscious, is actually part of the reason they consider it a supply chain risk, which I didn't really understand the logic of. But anyway, I think right now the government runs the risk of mandating the type of thinking it wants in its models, not allowing them to have a variety of thinking and constitutions. That's where some of the friction is coming in between Anthropic and the Pentagon: the government wants AI models to think the way they think. The models should have the same philosophies and political leanings as the current administration; that's in essence what they want. So they want labs that will allow them to take the models and have them output things the way they would. Elon Musk is a great example of this. His whole mission is seeking truth, but it's his truth; it's very much what he thinks is true. He has specific sources and specific beliefs that he is absolutely putting into Grok, because those align with his thinking. And I'm not saying that's fraud; it's his prerogative. He's building the model, and he can have it do whatever he wants. And maybe that's the spirit of this question: should we allow that? Should there be some models that Republicans like better and some that Democrats like better? I don't know. I think it's very dangerous for humans to be the arbiters of truth, but that's also how it's always been. That's how media outlets work; they're in essence the arbiters of truth.
You could read one publication and watch another channel covering the same thing, where the facts are very clear but presented in totally different lights, and the headline is different based on what they want you to believe or perceive. So I don't know. I think, what

47:19

Speaker B

if AI solved world peace? Like wouldn't that be amazing?

49:19

Speaker A

That would be incredible. Like I'd be all for that. Or got people to be able to have logical conversations based on actual truths.

49:22

Speaker B

It would be. One can dream.

49:30

Speaker A

Top of my list. Yep.

49:31

Speaker B

Number 15. If AI can solve advanced math problems, why can't it solve the technological unemployment conundrum? Will technology eventually be able to solve the problems that technology itself created?

49:34

Speaker A

That is the bet the AI labs are making. The reason it can solve advanced math problems is that it's been trained to solve advanced math problems. They hire mathematicians, experts in math, and they post-train the models on math; part of the training data is advanced math problems. So the reason it can do these things is that it has been trained to do them. The other thing is that math has verifiable outputs, so you can train the model on something and then know whether it was right or wrong, which allows you to do training in, simplistic is the wrong word, but a more structured way. There is no equivalent for the unemployment conundrum: there's no clear answer, and therefore we don't know what the right answer looks like. The way you train these things is basically: imagine giving the model a math problem or a writing test, and then an expert human saying this is good, this is bad, or I prefer this over that. In the math case, again, it's simply: you got that correct, or you didn't. When we get into these bigger societal issues, there is no clear answer, and therefore it's hard to know whether what the model is doing is correct or heading down the right path, especially if it starts getting into a more superhuman realm on societal issues, defense, security, philosophy. There's just no clear answer. And if it ends up becoming better than us at solving these things, how does the human even evaluate the output? It's playing 10-dimensional chess versus what we're able to play. So the bet the labs are making, especially Google with their Gemini models, is: solve intelligence, then solve everything else.
That's basically Demis Hassabis's mission in life: once we solve intelligence at a human level or beyond, it can then help us solve all these messy things, including energy, the environment, disease, biology, chemistry, all of these fundamental things. So I choose to believe he's right, because it's the most optimistic outlook for what's possible, and I don't see any good in spending my life worrying about the doom and gloom of it all, because I can't change any of the things they're doing at the lab level. So I spend my life instead focusing on raising awareness about the issues, creating a sense of urgency to understand them and figure out how they apply to you and your community and your family and your career, and then believing this is somehow going to work out in a very positive way, because that gives me more motivation to keep doing what we're doing versus sitting here being a doomer, buying a bunker in New Zealand, and hiding from the world. That doesn't sound like fun to me.
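The point about verifiable outputs can be made concrete with a tiny sketch: for math, a reward function can grade an answer exactly, with no human judge in the loop, which is what makes this kind of post-training tractable. The problems, the proposed answers, and the exact-match reward below are all invented for illustration; real pipelines are far more involved.

```python
# Sketch of why "verifiable outputs" matter for training: an exact-match
# reward needs no human judge. Problems and model answers are invented;
# eval() is acceptable here only because the problem strings are fixed
# arithmetic expressions we wrote ourselves.

def verify(problem, proposed_answer):
    """Exact-match reward: 1 if the answer checks out, else 0."""
    expected = eval(problem)
    return 1 if proposed_answer == expected else 0

# A pretend batch of (problem, model's answer) pairs.
training_batch = [
    ("2 + 2", 4),    # correct  -> reward 1
    ("7 * 6", 41),   # wrong    -> reward 0
    ("10 - 3", 7),   # correct  -> reward 1
]

rewards = [verify(p, a) for p, a in training_batch]
print(rewards)  # [1, 0, 1]
```

For "is this a good policy for technological unemployment?" there is no `verify()` function to write, which is the asymmetry described above: the training signal collapses to human preference judgments, and those run out once the model's proposals exceed what the judges can evaluate.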

49:44

Speaker B

Well, I saved a last pick-me-up question in case this didn't end on the right note. I'm still gonna ask it though. All right, so number 16. This past Sunday in your Exec AI Insider newsletter (if you're not subscribed, go to SmarterX.ai/newsletter; highly recommend it), you said: as leaders, we have one chance to get this right. AI can give us the greatest gift of all, more time. Or it can just be another technological revolution that expands our work, fills our hours, and leads us down the path of never-ending productivity gains and profits. We get to choose what to do with the time we are given, both for ourselves and for our teams. And I tell people this line all the time, because you've said it before. So, how are you feeling? Obviously you're feeling optimistic. What are some things you can say about your newsletter message?

52:33

Speaker A

There's just a lot of chatter right now around AI productivity and this negative belief that companies are going to do what they've always done, so any productivity gains we create will just create more work for everybody. And I definitely see that already. It's like, oh great, we're saving 20% of our time, we're increasing output by 2x, so let's just get rid of 10% of the staff, and the people who are left can keep doing what they're doing, be superhuman, and still work the same hours or more, because now there's more pressure on them to perform and do the work of five people. So you're seeing this slow slipping down the cliff of: we're just going to do what we've always done, take the gains of the technology, and fill the time with more work. When I get this question, people say, well, isn't that what's going to happen? And I'm like, if that's what your company chooses to let happen. I think we're just going to have a choice. And again, it fits into that optimistic outlook: at some point there's a profit level that's enough, there's a growth level that's enough, and you have to make those decisions as leaders and say, we're happy now. I understand that's not reality for publicly traded companies; there's always going to be pressure to keep performing. But for privately held companies, which are, by the way, the vast majority of companies in the world, I do think we have a choice. We have our annual meeting; when this podcast drops, we'll actually be in it that day. And I haven't talked to you about this at all, Kathy, but I was actually working on something on Sunday. I was like, well, how do we model this behavior?
I don't think the obvious answer, we're just going to work four days a week, is it. I think that's ludicrous. I don't think the four-day week is a realistic thing. It's kind of like, and no offense to anybody still running this in their company, unlimited PTO. That was just a ploy; it's not real. If somebody actually takes full advantage of unlimited PTO, they're going to get fired. There's an expectation of output in the company and amongst your peers such that you can't actually do something like that. And I think talking about a four-day or three-day work week falls in the same bucket: it's just messaging, just a PR thing; nobody's actually going to successfully do it. That doesn't mean you can't do real things, like Friday afternoons off during summer hours, or one day a month where we go volunteer as a team for a community cause, or reducing the number of meetings by 50%, or blocks where no one's allowed to schedule with you, or time to work on work you find fulfilling. Real things that make you enjoy your work more and give back some of that time. It was, again, a random half hour of brain power I had to work on this, but I think I may actually do it with our team on Thursday: hey, let's take 30 minutes and think about how we model what an AI-forward organization looks like. How do we give back some of the time AI is giving to us? Because I do think it's a gift we can take advantage of, but you have to be intentional about it. So yeah, I definitely put this under the bucket of things I'm very optimistic about, if you approach it that way. But otherwise, you will totally just fill the time.
And I feel it. I could work seven days a week, 24 hours a day, and I still wouldn't get through all the ideas that are in my head right now. There are so many things I want to use AI to do, and so I force myself to not do that, to step back and take care of myself each day. Do something from a health perspective: a workout, a walk, or, in your case, play pickleball. You've got to do those things. And I force myself, well, I shouldn't say force. The things I enjoy most are taking my kids to school, picking them up from school, being with my family at night, taking trips together. I'm very, very intentional about not filling my time, to allow for those other things. And I do work less now than I used to. I still work a lot, but if I went back and looked at what I was doing in, like, 2015, '16, '17, when I was running the agency and trying to start the institute, I was 100% working way more hours than I work now: nights, weekends, everything. I'd say I've done a fair job of giving myself time back, but I don't know that as an organization we've fully embraced it and operationalized it, and I think there's an opportunity to do that.

53:21

Speaker B

Yep, absolutely. I'm all for that.

57:55

Speaker A

Yeah.

57:57

Speaker B

Let's figure it out. All right, that is the end.

57:57

Speaker A

All right. Good stuff. Good questions, as always. I mean, these questions get better every time we do these series. It's amazing to me how we keep running variations of the same Intro and Scaling classes, and the questions just keep getting better. It's incredible. So thank you to everybody who attends these classes and asks the questions. And the YouTube comments, apparently we're even getting good stuff there now. That's nice to know. All right, good stuff.

58:02

Speaker B

Yeah. We'll see you next Thursday for Scaling AI's AI Answers. Oh, we have another one set for next week.

58:25

Speaker A

Okay, so two more episodes: next week we'll have our regular weekly episode, 205, and then I guess we'll have another AI Answers in two weeks. All right, thanks, Kathy. Thanks, everyone, for joining us. Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring, and keep asking great questions about AI.

58:32