#171: AI Answers - AI in Regulated Industries, AI Agents, AI Training, When AI Gets It Wrong, and Critical Skills for Early-Career Pros
Episode 171 of The Artificial Intelligence Show addresses 15 practical questions from business leaders about AI implementation in regulated industries, AI agents vs. generative AI, training strategies, handling AI errors, and critical skills for early-career professionals. Hosts Paul Roetzer and Kathy McPhillips emphasize that success depends more on understanding AI capabilities and managing organizational change than on the technology itself.
- Most organizations underutilize advanced AI capabilities like reasoning models and deep research—less than 5% of executives have tested these features despite 12+ months of availability
- Change management and organizational understanding are bigger barriers to AI adoption than technical limitations of the models themselves
- Personalization of initial AI use cases is critical for driving adoption—showing employees how AI solves their specific daily pain points creates immediate buy-in
- AI impact assessments must be ongoing and forward-looking, requiring understanding of where model capabilities are heading in the next 12-18 months to properly assess talent and workflow changes
- Early-career professionals should develop soft skills (curiosity, critical thinking, communication) and liberal arts breadth alongside AI literacy, not specialized technical skills alone
"Getting value out of AI right now comes down to asking good questions. Like, if you know the questions to ask of a chatbot, you can get a tremendous amount of value with $20 a month."
"The organizations that reimagine what's possible, that think about growth and innovation, finding new markets, new product ideas, new ways to engage with customers... the underutilized part of these AI assistants today is the reasoning capabilities."
"If you're in an organization where they just don't fully buy in yet... you have to find the lever to pull to get them there. You have to know who will actually move the needle."
"It is not the limitations of AI that is the problem, it is the limitations of our understanding of what it's capable of doing that is often the problem."
"If you had ChatGPT in 2005, I could have figured out in 48 hours what it probably took me four years to learn. The learning curve is almost non-existent."
Getting value out of AI right now comes down to asking good questions. Like, if you know the questions to ask of a chatbot, you can get a tremendous amount of value with $20 a month. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together. Welcome to episode 171 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Kathy McPhillips, chief marketing officer at SmarterX. Welcome back, Kathy.
0:00
Thank you.
1:13
We do this a couple times a month. If you're new to the show, we do these AI Answers episodes a couple times each month, in addition to our weekly episodes that drop on Tuesdays. AI Answers is presented by Google Cloud. This is a series based on questions from our monthly Intro to AI and Scaling AI classes that Kathy hosts with me. So if you're not familiar with those, each month we do a free Intro to AI. We have now done 51. Kathy, does that sound right?
1:14
52.
1:41
52. All right, I'm losing track. So we started that in fall of 2021, a year before ChatGPT. We started teaching an Intro to AI class, and we've had 45,000-ish people register for that class through the years. And then the Scaling AI class, the Five Essential Steps to Scaling AI, we started doing in 2024. Sound right, Kathy? Yes, 2024. And we just did number 11. All right.
1:42
See, I think it might be. I think 52 intro is our next one.
2:09
Okay. All right, there we go.
2:12
They all blend together after a little.
2:13
While, they really do. So the format for those is I present, and I kind of update it each month, but we present roughly the same class each month. It's part of our AI Literacy Project, just to provide free education and training for people to get them introduced to artificial intelligence and help scale it responsibly within organizations. And so when we do these, Intro to AI will normally get 1,200 to 1,500 registrants. Scaling AI is usually in the 600 to 800 range. So we get great attendance on these and we get tons of incredible questions, and there's often more questions than we can get to in the live class. So what we then do is Claire and Kathy go through and curate the questions that are left over, and we turn those into this AI Answers podcast series. So as I mentioned, Google Cloud is our presenting sponsor for this. We're grateful for Google Cloud and our partnership with them. We have an amazing relationship with the marketing team over at Google Cloud, doing a ton of interesting things together. In addition to this series, they are also our presenting partner for the Intro to AI and Five Essential Steps to Scaling AI classes. We are teaming up on a series of AI blueprints that'll start coming out this fall, and then our Marketing AI Industry Council. So you can learn more about Google Cloud at cloud.google.com, and I have mentioned this on numerous recent podcast episodes as well as in the Intro to AI classes: I would check out their AI Boost Bytes. This is a new series of short training videos that they designed to help build AI skills and capabilities in 10 minutes or less. There's a few dozen, I think, that are now available. We'll put the link in the show notes so you can go check those out. But again, it's a great quick way to learn from the Google Cloud team. They do an awesome job with those Boost Bytes. And then this episode is also brought to us by MAICON 2025. This is Kathy's life right now.
Kathy leads all the marketing efforts behind the MAICON event. She's done an incredible job. The team's done an incredible job. We are on track. I actually told her I'm going to raise the goal, but we are already at goal for number of tickets sold. So we are continuing to push forward and keep raising that goal. But everything's looking great. We're looking at 1,500-plus in Cleveland, October 14th to the 16th. Dozens of breakout and main stage sessions. Incredible speakers. There's four optional workshops on October 14th. So that's workshop day, and the opening party is that night. You can check out the full agenda at MAICON.AI, that is M-A-I-C-O-N dot AI. And also in episode 168 I did a full breakdown of the main stage sessions that were just announced. You can use POD100 for $100 off your MAICON ticket. Still have a couple weeks to go. We'd love to see you in Cleveland with myself, Mike, Kathy, and the entire SmarterX Marketing AI Institute team. All right, Kathy, I'll turn it over to you. If you have anything to add on MAICON, go for it. If not, let's roll into some questions.
2:14
So October in Cleveland is glorious. Also, our opening party is at Hofbräuhaus. So Oktoberfest is the opening party for.
5:08
MAICON, and I am not wearing the outfit they bought.
5:18
We are just deciding if Paul's gonna wear lederhosen.
5:22
I am. I. They literally put this in their internal chat last week or beginning of this week that, like, the outfits have been secured, and my response was, I am not wearing that. Whatever you bought, do not think I am showing up in it.
5:24
Come on.
5:38
You're gonna need to, like, use some nano banana to, like, put me into that stuff. You're not gonna see that in real life.
5:40
I mean, say less. We'll be working on that. Claire and I can work on that.
5:46
Some.
5:48
Some promotions today.
5:49
I just unfortunately gave an idea to marketing too.
5:50
Okay, let's jump into this. So this is from our September 24th scaling AI class. And scaling AI, the questions are just kind of more strategic, a little deeper, a little big picture thinking. And I was going through these last night, and I was like, dang, these are really good questions. So if you haven't joined us for five essential steps to scaling AI, our next one is November 14th. We'll include a link in the show notes so you can register for that. And I encourage you, if you haven't come in a while, come back.
5:55
Yeah. Just a reminder here, the format for these is, during the lives, I don't know the questions that are coming. And so we actually follow the same format here. Kathy and Claire curate these. I have not looked at these in advance, so if there's some I can't answer really well, I will be honest and say, yeah, I don't know, here's some resources, maybe.
6:22
I tried to not include those.
6:39
Okay, good.
6:41
Or position them in a way that I knew you could.
6:42
Okay.
6:44
All right. We'll do our best to cure you.
6:45
Yeah.
6:46
Okay, let's get started. Number one, how have you seen AI get introduced to a financial services firm? As they are highly regulated, I would.
6:47
Say so this applies to any highly regulated industry. You have to work closely with it and legal and procurement. Like, you have to know the barriers ahead of time. Then what I always advise people to do in these instances is understand the risks, understand the concerns of IT and legal procurement and then steer into those. So find use cases where those risks become low or non existent. So if they're worried about say for example, customer data getting leaked into models and you know, ends up in training runs of future models, things like that, or privacy or, you know, overall regulations for an industry, find use cases where that doesn't apply. And so a great example, you know, I think that's just super tangible is a podcast. So let's say you have a podcast. There's like, I don't know, we have like 15 use cases every week for AI in the podcast. That is all publicly available information. There's nothing we're doing or saying on the podcast that would come into any concerns around these risks that would prevent us from using AI. It's just a publicly available transcript. So we find ways to infuse AI into workflows and campaigns where the risks aren't there. And then you can spend more time trying to solve for the bigger picture and how to accommodate, you know, the regulations and the risks and the concerns that might come with the bigger uses, but don't allow that to prevent you from moving forward with low to no risk use cases. And again, this is regardless, financial services healthcare is another great example. Government, like there's all education. There are so many endless use cases. And I would say like we have Jobs GPT, we can put the link in, but it's just SmartRx AI and then you click on Tools. Jobs GPT is a great one. You can go in there and say, hey, I'm a financial advisor or I work at a bank and here's my role. How can I use AI in a low risk way? 
Just talk to the AI assistants about these things, and they'll help you find use cases that are going to be safe for you to use, and even help you build the business case and justification for it, so you can convince the people that you need to convince that it is a safe use of AI.
6:55
Absolutely. Okay, number two, what guidance would you give leaders who want to use AI not just to optimize today's operations, but to fundamentally reimagine business models and customer engagement for the next decade?
9:08
I really like this one. So of the four workshops that we're doing at MAICON, mine is on AI and innovation. I think that so many companies right now are still so focused on the efficiency and productivity side. It's the obvious thing that AI enables, but the organizations that reimagine what's possible, that think about growth and innovation, finding new markets, new product ideas, new ways to engage with customers, as the listener's asking here... again, I think the underutilized part of these AI assistants today is the reasoning capabilities, the ability to do deeper thinking about stuff like this. These are the exact kind of conversations I would go have with GPT-5, the thinking version of it, or Gemini 2.5 Pro. I think soon we'll have a Gemini 3, probably in October we're going to get the latest model. I would use those reasoning models. So again, if you're not familiar, a traditional chat model was what we got with ChatGPT in the early days: a prompt in and an instant response. So basically information retrieval and prediction; it would just kind of respond to you right away. With reasoning models, it takes its time to think. It goes through a chain of thought, it builds a plan, and it more deeply considers the actual intent of your question or your prompt. And so I would go in and say, hey, I have this kind of business. Here's the challenges we're facing. I want to think of innovative new ways to use AI to help us grow this business and differentiate from competitors. How can I do it? Just talk to the AI about these things. And what I often say to people is, imagine you have a highly accomplished business consultant sitting right there. How would you phrase the question to them? If you could have an expert in your industry, you could say, how could I reimagine this? What can I do differently?
Talk to the AI like that, and you will probably get all kinds of inspiration to help you reimagine what's possible in your business.
9:20
Yeah. When it comes to the customer engagement side of things, I would say, you know, what does reimagining business models and using AI in certain places allow you to do from a customer engagement standpoint? What doors does that open? What time does that give you back to be focused more on the customer engagement side of things? And I'm excited to see where AI can help us more with customer engagement in the future. Because right now the way that we're using it is, I mean, in some ways on our website and through our chat and things like that. But also, it's just like, okay, I have this time back, now I can go engage. But are there going to be opportunities coming up that could enhance that even more?
11:22
Yeah. I mean, just yesterday we were as a team looking at a new capability in the new learning management system that we're going to be rolling out for AI Academy here soon. And it has a more intelligent chatbot built into it that's trained on the courses and the content. And so if you think about it, previously you would have had to reach out to customer support and say, hey, I need some help building a learning journey, I'm not sure what to take next. And now it looks like we're going to be able to have that kind of capability right in the system. And to Kathy's point, the way we always think about strategy and business is: what's more intelligent, what's more human? It was a tagline I created for our conference back in 2019, but more intelligent, more human. And we think about everything through that lens. So if the chatbot within the learning management system becomes more intelligent and actually enables real-time interaction 24/7, where our learners can interact at any moment, then it frees up the humans to do more personalized outreach, to connect with our learners, to spend more time in person with them, things like that. And so we're always trying to find that balance. And I think that's the kind of stuff this enables. And if you think about the future of your business and the strategy and reimagining what's possible, think about the more intelligent, more human lens. And every time you do something that's more intelligent, it's going to free you up to do the more human stuff. Kathy does a really good job of this with our marketing, and on the customer success side she spends a lot of time with one-to-one human connection. And that's made possible because we automate a lot of the low-impact stuff.
11:54
But it does make me nervous. Like, if someone I know really well comes on our website and they get a chatbot who doesn't know the nuance and the history of that person, are they going to be like, what the heck, I've been talking to you for four years and I get this? Like, I'm a little nervous with some of those things. But I think we also just need to try it and see, and just kind of, you know, enhance it from there and figure out how to make it better.
13:29
Well, I think you need to make the off ramp to a human very easy.
13:52
Correct.
13:56
Like if you want a human. But I do think that, you know, HubSpot's had a lot of data recently on this, because they're doing a great job of reimagining customer success and customer service. And it's something like 80 to 90% of inquiries or requests are easily resolved through a chatbot. And then there's going to be those instances where someone still wants to talk to a human. I mean, I have that. You probably have that too, Kathy. I think recently about, like, AT&T in my personal life. Their chatbot's gotten pretty good. I remember back when I started my first business in 2005, I was like, if I never have to talk to AT&T again, my life will be complete. It was just a horrific experience as a business owner to have to deal with AT&T in 2005. And now, honestly, it's pretty smooth. I can just go in, I can talk to the chatbot, I get most of what I need. And if I need a human, you just click a button, you bounce over. Tesla, I've had a very similar experience. They have incredible customer service through Tesla's app. So I think it can be done really well where it's low friction. I think at the end of the day, that's what we're trying to solve for: what is the low-friction way for people to get what they need? And then how do they get the more human side when they want it?
13:57
Correct. Okay, I'm going to move on so I could talk about this all day. Number three, you've outlined five essential steps for scaling AI in organizations. But what happens when a company takes a more advanced route? Say they've actually built their own AI in house with multiple models, and one person is responsible for figuring out how the entire organization should adopt it. Do those five principles change in that situation? And if so, how?
15:04
Yeah, I think so. Basically, the way I explain the five steps we go through is: building an internal academy, creating an AI council, responsible AI principles and generative AI policies (that's sort of the third step), AI impact assessments, where you're looking at not only the current impact but, you know, the next 12 to 18 months, how it changes your talent, your tech stack, your strategies, all those things, and then building an AI roadmap. What we always say is different organizations are going to be at different stages of transformation. If you've already done all of those things, awesome. Then you're optimizing the transformation process from there on out. You're constantly doing AI impact assessments. That's not a you-do-it-once-and-you're-done thing. Impact assessments are an ongoing thing. As new models come out, as new capabilities come out, you have to go back and do another impact assessment. Okay, what does this mean to our talent?
15:27
If.
16:16
If agents... like, we just had Claude Sonnet 4.5 come out yesterday. So we're recording this on September 30th, by the way, just for context of what's happening. It came out yesterday, and they claim it can do up to 30 hours of coding on its own. They're basically claiming it can build a Slack-like platform on its own over 30 hours. That changes things. If that's true, you now have to step back and say, okay, if we are in the business of, say, building software, or if we weren't in the business of building software, can we be now? Like, can we build our own software? So impact assessments are an ongoing thing. Generative AI policies are an ongoing thing. As agents become more capable, you have to revisit what our policies around agent use are. So I don't know that those five steps become obsolete once you've done them. You're always going through them, and that's why I even say they're not sequential. You're often doing all five things simultaneously, and then you're constantly improving them. The roadmap keeps changing as things become possible. So, yeah, if you're further along, that's awesome. And you may be doing whole new things. Like, in the Scaling AI course series through Academy, I had a whole module on building an AGI Horizons team. Maybe you're spending your time thinking, well, what happens when we get to artificial general intelligence? What does our company look like then? And maybe that's where more of your brain cycles are going: what happens in 2027 when maybe these things are as good or better than all of our human workers? Like, now what do we do? And there's no limit of stuff that needs to be solved for. I think if you've solved for all the first phases, great, you keep iterating on them and you start moving on to the next stuff.
16:16
And I do think, though, there is value in: okay, you're advanced, but your team isn't. Take some courses together, so you are learning together, so you are also reframing what you've been building, going through something that has a plan attached to it. And so you know what your team is learning along the way, so you can kind of just do.
18:07
This all together. Yeah. And it's a great point, the change management side of things. I mean, if there's an organization that has solved all of this and figured out the change management side and has upskilled all their people and they're ready to go, awesome. I would love to hear from you, and we'd love to do an AI transformation story on you on the podcast. It is hard to find those companies. The vast majority of companies are kind of well into the pilot phase, sometimes starting the scaling phase, but really have not fully thought through the change management side. They're trying to deal with: what does it mean to our people over the next year? Is it going to affect our staffing levels? How are we changing our recruiting process? They're just starting to ask the right questions, and the vast majority of them have no idea what to do with agents and don't even know what reasoning models are. That's the reality of most companies. So if you're in the boat where you're past all that, amazing, you are probably in the top 1% of companies in the world right now.
18:26
Number four, we hear a lot of executives say that AI is a top priority, but in practice it often ends up as a side project handled by a few people on the margins. From both an organizational and economic standpoint, how do you actually convince leadership to commit the resources and build true AI enablement across the business?
19:28
Yeah, it's a really good question. I mean, it all comes down to education and training, awareness and understanding. So, you know, every year we do this State of Marketing AI report. We specifically are asking marketers, but there's a lot of business leaders in there as well. And I always think marketers are usually kind of the leading edge of this within companies, so it's a pretty good data set to look at. And what we find year over year is lack of education and training is the number one barrier, followed very closely by lack of awareness and understanding. So what I have found is, if you're in an organization where they just don't fully buy in yet, maybe they're just trying to solve this as a technology problem and they're not thinking holistically about reimagining business models and doing impact assessments on teams and tech stacks and stuff like that, you have to find the lever to pull to get them there. And I don't know what that is; you know your organization better than us, obviously. Is it the CEO that needs to be convinced? Is there, say, a CIO who just doesn't get it, and they're thinking only from a technology perspective? You have to know who will actually move the needle, who you need to get bought in at the C-suite level so that this becomes a much higher priority and is thought of more holistically than just a technology problem. This is a true transformation. It's an operating system for the business. And for some people, it might be: show them what the competitors are doing. For some, it might be: show the latest research on the impact of AI agents. You have to know what it is that actually drives change in an organization. Sometimes what we've seen work really well is you just get a team. It could be the marketing team, the customer success team, the ops team.
Have them do something that has a true business impact and then take that use case or case study in, like, three to five slides and say, hey, here's what we were doing before. We were spending $200 a month on this thing. We integrated this tool, and we're now spending $50 a month instead of $200. We've taken the other $150 and launched these two new campaigns, which generated a million dollars in revenue. Boom. Okay, let's do this 10 more times across the organization. So make it as simple as possible for them to say yes and to buy into the bigger picture.
19:45
One example is yesterday we were looking at what Noah's been doing with customer success and bringing things in through HubSpot versus emails. And that quick little change, like, if you didn't know that was going on, you would have been like, oh, my gosh, why haven't we been doing this forever? And now what else can we be doing with this?
21:59
Yep. Yeah. And I will say, the other thing is, again, you have to get the full buy-in. But share successes internally. Even if you just have a bunch of pilots going on across departments and it's pretty spread out, not unified under one initiative, then get those people together and share. Build those internal champions from sort of the bottom up. And people want to be a part of things that are working. They're going to be inspired to say, okay, let me see what I can do on the sales side, or, you know, on the product side.
22:16
Well, that was the response. Tracy was like, okay, now do this.
22:44
Yeah, yeah, for sure.
22:46
Okay, number five, there's a lot of talk right now about generative AI and AI agents. Are these really two different things? And if a company isn't actively using AI agents yet, do they still need to consider policies and guardrails around them?
22:48
Yeah. So, I mean, generative AI is the ability for AI to create things: text, images, video, code. AI agents are the ability for an AI system to take actions on your behalf. So imagine it being able to go through a 10-step process. The most tangible example for people, if you haven't tried it yet, is in ChatGPT or Google Gemini: go in and run a deep research project. Deep research is powered by AI agents. Basically, it'll go and build a plan. So you say, I want to research my 10 competitors. Here's a list of the competitors. Go out and look at their pricing models, product updates, any current marketing, messaging. Just give it a project like that. If you don't know what to do, go into ChatGPT and say, I want to test deep research. Here's what my company does, here's what I do. Write me a prompt so I can test the capabilities of deep research. But when you do it, what happens is it builds a plan and then it takes that plan and executes it. So it goes out to websites, it reads them, it processes and synthesizes the information, it goes through a chain of thought, and then it creates a research report for you. So that's an AI agent. It is doing something, it's taking actions, it's building a plan. So that's the main difference. Now, the agents have varying degrees of autonomy. What I mean by that is, sometimes you have to say, hey, here's the 10 steps, just execute these 10 steps. That's you, in a deterministic way, saying, okay, this is what I want you to do, now you're going to go do it. That's very little autonomy. The other is, hey, I just want this achieved, I don't know how you're going to do it, you go figure it out. And now it's further degrees of autonomy, where the human's less and less involved in the process. So they are very different things. I will say AI agents are going to just be built into the software you use. So right now you have to kind of go find them.
You know, whether it's through Microsoft Copilot or Salesforce or HubSpot or whatever, you're kind of more involved in the process of identifying the need for an agent, giving it parameters and guardrails, building it. What's going to happen, though, is it's just going to be embedded within the tools. A good example of this would be in ChatGPT right now. I don't know if this is at all pricing tiers, but they have an agent mode. So if you just go into ChatGPT and you click on, I think it's Tools, you'll see agent mode pop up. That's building an agent right into your experience. I think I gave the example on the podcast about this. I used agent mode to look for a new front door for my house. I just took a picture of the door and I was like, can you go find doors like this? I need a new door. I'm not explaining what the style is. And it figured out what the style was, and it went and looked around, and then it brought it back to me. It's like, okay, here you go, here's some places. So, yeah, that's kind of the distinction. I do think it's important to start building agents into the generative AI policies, though, because people are going to have access to these things whether, you know, you go seek them out or they're just embedded into the tools you're using.
23:02
I just did that with. We had to take a tree down. So my husband and I were like looking like, what could we do back here? And I took a picture and I was like, what could we do back here? It was pretty cool.
25:59
It is awesome.
26:10
And then it said pickleball court. He's like, no, it didn't.
26:10
It's. That's getting personalized to you now. Now it knows your interests.
26:15
Like, it didn't say that. Okay, number six, for independent or loosely connected teams, is it even possible or advisable to share a single enterprise AI account? What should we know before going that route?
26:19
Independent or loosely connected teams.
26:32
So I think like could like, for.
26:34
20 bucks a month, everybody just shares the same account or just like a.
26:36
Team of like an like agency owners that are within the same, you know, cohort or something. Could they jump in and share an account or is it by email address?
26:39
Yeah, I mean, I've heard of this at a college level, like students sharing a single account because, you know, they're college kids and 20 bucks a month is hard to do on their own. So I've heard of those instances. I don't know. I mean, I would just make the business case for everybody to have their own license and to set it up under a team or enterprise account. They're making it easier to collaborate. ChatGPT in particular just announced that you can now share projects. So you used to be able to share GPTs, like you could build custom GPTs and share them with people on your team. They now let you share entire project folders. That's new as of, like, three days ago, I think. So they're making it easier to collaborate. Google still has some ways to go on making Gemini more functional like this. As of right now, you still can't share Gems when you build them, which is absurd, but you can't. So hopefully Google fixes that this fall. But I don't know, I would definitely encourage people to get individual licenses. You definitely can share a team account. I'm trying to think... in ChatGPT you don't need the same email domain. Yeah. So if you had a group of people that were working together, say five independent contractors who collaborate on stuff, you could in theory create a single account that you're paying $20 per user per month for, and then have joint access to that with different email addresses. That's how it historically worked. I don't know that they've made any changes to that where you have to verify your email address. Like Asana, for example, we use Asana, and with that one you have to have the same email address to be part of the enterprise, and then you can invite outside guests. That's not how ChatGPT historically has worked.
26:48
So, let's say it's five independent contractors. Is there anything they should think about, if they were going to go do a team account together, as far as privacy for each other?
28:35
Well, I think you'd have to just be familiar with how it works, how the privacy stuff is set up, and how the terms are set up. The one thing I would consider there is that right now the memory of these things and the personalization happen at an individual level. So let's say we have 15 people on a team license for ChatGPT currently. To my knowledge, it doesn't learn across the system and then carry that into one unified, I'm going to humanize this, one brain, where everything that me, Kathy, Mike, and Tracy have done is remembered at a centralized hub. It's personalized to each individual user within that team license. So it starts to learn me, but whatever it learns about me and the company doesn't transfer to Kathy's experience in ChatGPT. I would imagine that changes and it starts to centralize the knowledge of all the individual users into the memory of the one overarching account, because it can learn a lot of context from all the ways the individuals use it. I haven't heard that as part of the roadmap, but it just seems like an inevitable thing that would happen.
28:45
And if that does happen, we'll talk about it on the podcast. Yeah, you and Mike can break that down.
29:54
Yeah.
29:58
Number seven, if a company doesn't have an AI council, but leadership wants a vision for each department, say inventory management or HR, where can someone start learning what AI can realistically do for each function?
29:59
Yeah, I mean, literally I would, I would just go into ChatGPT or Gemini and have those conversations.
30:11
So, Jobs GPT.
30:16
Yeah, Jobs GPT is a really quick way to do it. We've mentioned Jobs GPT a couple times, but in essence, it's a ChatGPT custom GPT, so it has the capabilities of ChatGPT, but I basically trained it on an AI exposure key that says, okay, as these models get smarter, these are the known things they're going to improve at; how will that change certain jobs? So you can just put in a job title or job description, or you can ask for examples of where these jobs may go in the future. But in essence, you can just go in and say, okay, we don't have an AI council. It goes back to what I was saying earlier: just talk to it like a consultant. We don't have an AI council, so no one's guiding us on this. I work in inventory management. There are three of us who are really excited about AI, but we're not sure what to do. We have ChatGPT licenses. How can we get started? Help us find maybe three to five use cases with the most immediate value we could create. What should we do? So, I mean, literally, getting value out of AI right now comes down to asking good questions. If you know the questions to ask of a chatbot, you can get a tremendous amount of value with $20 a month. But I think most people just aren't there. It's almost like the old adage when someone would ask you a question and you'd say, well, did you Google it? That's kind of how I approach stuff now, and not in a condescending way at all. But when someone comes up to me at a speaking engagement and asks me a question like this, like, oh, I'm an HR leader, I'm just not sure where to start, I know it's being used, I'll say, well, have you talked to ChatGPT about it? Have you asked the question you're asking me directly to ChatGPT and given it the context?
Like, provide the background that you're providing to me, and it's going to give you a really good place to start. And then you're the domain expert, so you can figure out which pieces of its response are valuable. But it's often just the best place to get going.
30:17
So last Friday, we were at the Ohio Aerospace Institute doing a day with middle school girls, and we were showing them Jobs GPT. They were coming up and saying, I'm interested in fashion design, I'm interested in math, things like that. And one girl came up and said, I want cosmetology. I was like, okay, this will be fun. So I throw it in there, and obviously AI can't cut someone's hair. I mean, who knows, maybe someday it can. But all these different things came back. And I said, what do you like doing? And she's like, I love color. And I was like, I wonder if it could help you with that. And her mind was just blown.
32:15
Yeah, that's cool.
32:47
Just, like, learning about it and not, like, doing it. She's like, I like this part of it, but there's this whole science thing that I might not really know about, and I don't want to do the business side, but can AI help me with some of that? Because I really want to focus on being with the people. I don't know, it was such an interesting day. Every single one of these young girls was just blown away by the opportunity of what was out there.
32:48
Yeah, it was pretty cool. I'm so glad that the team did that. It was a great event.
33:08
Number eight, what are your best practices for training newer AI users, especially around managing expectations, getting quality outputs, and staying safe? I've noticed firsthand how much difference proper training makes in adoption.
33:15
So the way we approach this, whether we're running workshops for corporations or just providing guidance, is that you have to personalize the first few use cases. The greatest way to get immediate value and get people bought in and seeing the full potential, if you're going to provide Copilot licenses or ChatGPT licenses or Gemini or whatever you're providing to them, is either to run an interactive workshop where you help them find those first three to five use cases, or to have the people leading the change management within your organization build GPTs for them. So if you're going to introduce it into sales, for example, build a GPT that helps them do something that every person on that sales team has to do every day. And it's like, oh, this is going to save me three hours a week, this is fantastic. Or give them a series of prompts: as a sales professional or customer success professional, here are five prompts you may find extremely valuable to help you right away with efficiency and productivity. So we're just huge proponents of personalizing the integration of this stuff so that the value is obvious right away. And do this at the C-suite level too. I'm a huge believer that if your CEO isn't fully bought in, you build those first couple prompts for them. Build a GPT for them that actually helps them right away, and it changes everything once they see for themselves how it can help them. And for the people who are resistant because they have some preconceived idea about AI, maybe they think it's abstract, or they're fearful for their job and don't really get it, as soon as you personalize something for them, the light bulb goes off.
33:27
Yeah, that kind of goes into the next question. Number nine, how do you drive stronger engagement in AI enablement trainings when individual contributors already feel too busy with their day-to-day work to spend time learning AI?
35:03
Yeah, find out what it is that's keeping them too busy and then solve it for them with a prompt or a custom GPT. Say, hey, I know you're spending five hours a week doing this thing; I built something that'll get it down to, like, 30 minutes for you, and when you're ready, I can build three more for other things you're doing. So again, it's just the personalization. Make it easy for people to say yes. I always end my course with: be curious, explore AI. It's this whole idea of driving that curiosity by showing them something. Make them curious to learn more and find the next use case that's helpful to them. And sometimes it can be in their personal life. Show them how to use it to help their kids with homework, stuff like that, like study mode and guided learning in ChatGPT and Google Gemini, which I've been talking a lot about on the podcast. It's fantastic for helping students. And once you see that, it's like, oh, I could probably do this at work. I could take the same approach.
35:15
Absolutely. Number 10, what is the best way to handle a situation where AI got something wrong?
36:14
Well, assuming you caught it before you published it or sent it. If it got out the door wrong, that's a different story. But I mean, they get stuff wrong. Hallucination is the technical term for what happens. They will make stuff up. They will use data that didn't exist. They will cite a source that wasn't there. They will misspell a name sometimes. They're going to make mistakes, and that's why you have to have the human in the loop. This is the whole idea of the AI verification gap: the human has to verify these outputs, and the higher risk, higher profile the output, the more important it becomes that a human is in the loop. We've probably all heard the stories of lawyers using these to create legal briefs that they submit to judges, and the judge finds that there was something wrong in there. Now you've got a major problem. The humans have to be in the loop. And if it has to do with business analytics data, customer data, or a communication that's going out, you absolutely have to have the human heavily in the loop in those processes. And at the end of the day, in the responsible AI principles that we have published and that we share with people, we say the human owns the output. Just because the AI is capable of doing these things doesn't remove the agency from the human, the responsibility to own the accuracy and the quality of that output. So I think that's just something that needs to be taught to people: you are still in charge of making sure the output is correct. It's not workslop, as we talked about on the podcast this past week. What was it, episode 171? I think we were talking about workslop. 170, actually.
36:24
170?
38:01
Yeah. So you don't want to be handing stuff in that you didn't put the actual time into.
38:01
So what if it got something wrong about your company? Is there anything you can do to mitigate that for the future? Can you retrain the model to have the facts correct?
38:08
Yeah, so this happens to me. I've seen it numerous times; I've even had it with GPT-5, so even the most recent models. There was actually a funny one, I think it was yesterday, for personal stuff. My family and I, I have a 13-year-old and a 12-year-old and my wife, have Mario Kart World races every night. And I always lose, I'm always in fourth place. So I was having a conversation with Google Gemini about what the best character and kart match is so I can actually start winning. And it came back and started recommending stuff for Mario Kart 8. And I said, no, Mario Kart World, you're not giving me the right information. You don't choose the wheels anymore; that was in Mario Kart 8. And it goes, oh, I'm sorry, you're right, Mario Kart World, and it actually went and searched the web and updated its knowledge to be current. It's a personal example, but that happens in business all the time, where I'll be like, no, you're wrong, you can do the thing I'm asking you to do. It may say, oh, I can't generate images. Yes, you can. And then I think sometimes that lives in its memory and it remembers that change. So yeah, if you're using just a standard chat, you can tell it, hey, remember this for next time, and it'll remember that. The other thing is, if you're using GPTs or gems, you can upload updated information into its knowledge base. You can say, hey, anytime I'm asking you about this stuff, refer to the knowledge base PDF, and then it'll reference that anytime it's doing an output.
38:17
And can you set that up for other people? Can you keep that in the knowledge base for others?
39:49
Yeah, so this again goes into the team and enterprise accounts and whether they function as a single hub of knowledge or not. I don't know that you can upload a knowledge base into the team account that is universally referred to for all users.
39:54
Unless there was, like, a GPT that was built, and that was the knowledge base.
40:15
Correct. That's how we do it internally. So the co-CEO GPT that I built has a knowledge base that's trained on company data. The system prompt has revenue goals, it has all this stuff, and then I shared that co-CEO GPT internally with the team. So when they use it, it has that same knowledge base about our information. So that's a way to control it now.
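The shared co-CEO GPT pattern described here, one central knowledge base plus a system prompt that every teammate's session is grounded in, can be sketched in plain Python purely as an illustration. The hosts are describing the ChatGPT UI, not code, and every name and fact below is an invented placeholder:

```python
# Hypothetical sketch of the "shared company GPT" pattern: one central
# store of company facts, prepended to every user's question so each
# teammate works from the same context. All values are placeholders.

COMPANY_KNOWLEDGE = {
    "revenue_goal": "placeholder annual revenue target",
    "brand_voice": "concise, practical, no jargon",
}

SYSTEM_PROMPT = "You are an internal co-CEO assistant for the company."

def build_grounded_prompt(user_question: str) -> str:
    """Prepend the shared system prompt and knowledge base to a query,
    mirroring how a shared GPT references its knowledge base on every
    output."""
    context = "\n".join(
        f"- {key}: {value}" for key, value in COMPANY_KNOWLEDGE.items()
    )
    return (
        f"{SYSTEM_PROMPT}\n"
        f"Always ground your answers in this company context:\n{context}\n\n"
        f"Question: {user_question}"
    )

prompt = build_grounded_prompt("Draft our Q4 kickoff agenda.")
print(prompt)
```

In the actual ChatGPT feature, the knowledge base lives in the shared GPT's configuration rather than in code; the sketch just shows why everyone who uses the shared GPT gets the same company grounding.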
40:17
Yep. Number 11, for interns and early-career professionals entering the workforce now, which specific skills or habits are most critical to develop in order to stay ahead of the AI curve and actively shape the future rather than simply reacting to it?
40:39
That's a great question, especially given all the stuff we've been talking about on the podcast lately about the challenges for early-career professionals, and the unemployment and underemployment rates are not great right now for people in the 22-to-25 range. So I don't know, I think these are the softer skills, but curiosity, imagination, and critical thinking are just fundamental. The ability to work well with the AI, to ask smart questions, to not just do a single prompt but have follow-up prompts and really work with it. Intrinsic motivation. Those are the kinds of things that have always been important, and they remain really critical at that early stage. I can tell you the advice I've been giving to family and friends who are in college: come out with any business major you want. I don't care what it is. Economics, accounting, finance, whatever. If you want to be in the business realm, any of them are fine. You just need to solve problems. You need to learn how to work through hard things. Take entrepreneurship classes if you can, because I think entrepreneurship is going to be like the golden age here. It's going to be way easier to create businesses. And then take as many complementary AI courses as possible. You basically want to come out with a liberal arts background where you have a diversity of skills and knowledge, and then you know how to work with AI to augment your capabilities, not replace the need for you to do the work. So challenge yourself to do the hard work. Go through these hard things. My son's interested in coding. Great. If you want to do computer science and coding, do it. I don't care that Agent 3 can code or Sonnet 4.5 can do the coding. In eight years, is he going to actually be writing code? I don't know.
But for eight years he's going to learn how to do really hard, repetitive things, and that transfers into anything. So yeah, it's hard to say how you should think about education and changing majors and stuff, but my general guidance is just do these things well and then work on the human skills. Like, I was going to say communication.
40:55
And knowing coding is amazing, knowing how to do all that. But take a communications class.
43:07
Yes. Learn how to talk to people. Take a class where you have to do presentations and, yeah, debates. All of it's good. I am a believer, though, that liberal arts degrees will continually take on greater value, and I would really challenge parents and students: get diversity in it. Take a psychology class, a sociology class. Really spread it out and learn a lot of different backgrounds, because I think people who are well-rounded in that way are just going to be able to interact with AIs way better and consider the more human aspect of this as AI starts to really be everywhere within society.
43:14
Yeah. Talk a little bit more about the entrepreneur side of things, because you had mentioned previously that it opens up a whole world of opportunity from a job standpoint: if there are layoffs across enterprises and things like that, small businesses could be thriving soon.
43:55
I mean, starting a business and running a business is very hard. And I always go back to when I first started my agency in 2005 and had no idea what I was doing. I was 27. I came out of a liberal arts college, journalism school. I had a business minor, so I'd taken business classes, but I had no idea how to start a business. I didn't know how to manage it. So I went to a local organization in Cleveland called SCORE, and they matched me up with a retired executive from a manufacturing company. And the work he was doing was wonderful, he was volunteering his time, but he had zero context into what I was trying to do. He told me the idea was terrible. And I'm like, okay, where do I go now? That was it. That was my one shot. I found this organization in town that was supposed to help me as a young entrepreneur, and now I don't know what to do. If I had had ChatGPT in 2005, I could have figured out in 48 hours what it probably took me four years to learn. The learning curve is almost nonexistent. It's like, hey, I want to build a professional services firm. I want to do the pricing model differently than it's traditionally done, because I think charging by the hour is absurd and obsolete. How should I do it? What should I think about? How should I build the financial model for the business? I would have just spent 48 hours asking all the questions that I had back in 2005 when I had no one to ask. I could ask accounting questions, finance questions, legal questions, business strategy questions. You have on-demand intelligence, in many ways at a PhD level, at an expert level, to talk to about anything. And now, if you have any semblance of coding ability, you can code apps, you can build things. I was talking to my son last night; he's in seventh grade and they do an entrepreneurship challenge where they have to build a business.
And we were talking, and I was like, hey, we could build an app. You can actually code an app in seventh grade; he's taking some coding classes. I said, you could build a working prototype of an app through these tools. The walls have come down to create things and bring them to market. Now, there has to be a market of people to buy what you're going to create. But I do think that with entrepreneurship, there's going to be no excuse to not be able to start a business if that's what you want to do.
44:10
Yes, because you could do all that stuff in 48 hours, but you need to have grit, tenacity, curiosity, maybe be a little crazy, to be able to do that and do it well.
46:31
Yeah, and it takes a while to make money as an entrepreneur. I mean, I was paying myself the same thing for, like, the first eight years of my agency. So it doesn't solve the fact that you still have to figure out how to make the money. But the actual building of the business and moving through that learning curve is so dramatically accelerated from when I started my first business back in the day.
46:41
Okay, number 12, if companies can't always claim ownership of AI generated video or images, how should marketers think about the risks? Not just legally, but in terms of brand trust and reputation?
47:08
So the copyright stuff is just going to get so fascinating. We'll talk about it on the podcast next week, but it appears that with Sora 2, which is the forthcoming video model that honestly may be out by the time you hear this podcast, OpenAI's approach is to steal everything and allow users to create anything. Disney characters, as an example: if you want to create Disney characters in Sora 2, you'll be able to. They're going to basically say, if you want to stop it from happening, Disney, give us a call. So they're basically going to say, we don't care about copyrights, we are going to allow our users to create whatever they want, and then you can go sue the user if you want for creating the output, but we're going to enable it to happen. So I think we are about to go through a really, really weird phase of copyright law where the AI labs are brazenly going to say they don't care, and they're going to challenge everybody to stop them. They will have the support of the current U.S. administration; the Trump administration will support their lack of caring about copyright. And so that's going to change a lot. In the near term, though, this comes down more to the moral compass. In our responsible AI principles, we say that when legal precedent lags behind, which it always will with AI, your brand and your company have to have a moral compass, and that moral compass has to play a role in deciding how you will use this technology, because they're going to enable you to do anything with it. You have to decide what you're going to allow your people to do with it. And then from a copyright perspective, if you want to own the copyright, currently under US law, AI can't create it. So if your logo is 100% AI generated, you have zero protection for it.
So if you create a logo and people know it was AI generated, they can take it and put it on a hat and start selling it online, and you can't do anything about it; they're going to force you to prove it wasn't AI generated, basically. So I don't know. I mean, again, I think this is a really important discussion and a conversation you need to be having internally with legal. It needs to be built into your generative AI policies. It has to be factored into your responsible AI principles, because there aren't going to be really clear answers to this one for a while.
47:20
Yeah, number 13. Relative to all of the expectations around AI, where have you seen it fall shortest in practice? Are there particular tasks or use cases that consistently expose its limitations?
49:47
Honestly, I see people's understanding of AI and the change management commitment being a far bigger issue than the limitations of the AI itself. And the reason I say that is, if I go back to the reasoning models and deep research as a very tangible example of this within ChatGPT and Gemini: I constantly poll anytime I do public speaking, and I've done this in rooms with a thousand-plus people and in rooms with 300 executives from individual brands. I say, who has done a deep research project in ChatGPT or Gemini? It is consistently less than 5% of the room, and that's as of 30 days ago, when I did this again. If you haven't done that, you have no idea the capabilities of AI today. It is so far beyond just standard chats where you give a prompt and get an output. And yet we've had reasoning models for 12 months now. The first one, o1 from OpenAI, came out in September of last year, and yet the vast majority of organizations have no idea the capability even exists. So it is not the limitations of AI that are the problem; it is the limitations of our understanding of what it's capable of doing, and then the operationalization of that understanding. Once you know what deep research can do, how do you actually operationalize that into your organization? That, by far, is the bigger issue than the AI itself hallucinating sometimes and things like that.
50:01
So if you're coming to MAICON, go meet Katie Robbert from Trust Insights. This is one of her passion projects, and she loves talking about it. Go up to her, ask if you can see a picture of her dogs, and you will grab her attention. You can ask her all your questions about change management. She's so good at it.
51:32
Yeah. And it's like a cheat code right now for competitive advantage: the companies that actually take a change management approach to this can just benefit from the tech that's sitting there to be used for 20 bucks a month per user. It's crazy, the opportunity that's still in front of us.
51:46
Number 14. A lot of people are learning how to prompt AI more effectively, but how do you also train and guide it to be used ethically in the workplace? What practices or guardrails should organizations put in place so that prompting aligns with their values and policies?
52:06
This definitely goes back to the moral compass from a couple questions ago. I do think the responsible AI principles are essential, and not just documenting them but actually teaching them and ensuring that people follow them from day one, that it's part of the onboarding process. Because then, when people run into these gray areas of, oh, I can now generate any copyrighted, trademarked thing I want with these tools, awesome, I'm going to start using them for social and putting GIFs into my emails that use these well-known characters, if you train it right and you do a true change management process, that's not even a debate; they're going to know not to do that. But if you don't teach it, then there's no way. So it starts with documenting it, having generative AI policies and responsible AI principles, and then teaching it and making sure you live it as an organization. It has to be built into the culture. There's no other way to do it. You can't just document them. I think I've told this story before: I've sat in meetings, there was this one big brand in particular with about 70 executives in the room, and Mike and I were running a workshop for them, and we asked, are there generative AI policies? And the CIO said yeah, and everybody else in the room looked at the CIO like, where? Who owns them? Have we told anybody they exist? And so you realize that sometimes organizations do this, but they don't live it. And that's the key. Like anything else, if you go back to any kind of transformation, any kind of culture initiative, it has to be infused into the organization. It can't just be words on a screen.
52:20
Is there any value, because the responsible AI principles are under Creative Commons, could they download that, tweak it however they wanted, and upload it as part of the knowledge base? Would there be value to that if there's, like, a shared company GPT?
54:02
Yeah, definitely. And we can put the link in the show notes. The responsible AI manifesto I wrote in January 2023 we released under a share-alike Creative Commons license, which means you're welcome to take it and edit it and do whatever you want with it, but when you reshare it, you share it under the same license. So it's like open-sourcing responsible AI principles, in essence. So yes, you can take those and tweak them however you want. And then once you have it, you could in theory put it into a knowledge base and tell the AI to always reference it. I did that when I was creating the AI Fundamentals and Piloting and Scaling AI courses; I trained it on stuff like that. So yeah, definitely a step you could take.
54:14
Okay, last question. Number 15. Of the five essential steps to scaling AI, which step is the most challenging for organizations, and what do you see leading organizations doing differently?
54:55
Yeah, my instinct immediately on this one is the AI impact assessments, because I don't know anybody that's doing them. And the reason is that to do AI impact assessments well, you have to have a vision for where the AI capabilities are going, which means using that AI exposure key I referenced. I teach a whole course on this in our Scaling AI course series, on how to do AI impact assessments, and walk through different examples. But in essence, you have to understand the labs are working on persuasion and personalization and agentic capabilities and all these things, and then you have to project out how far along they're going to be in the next 12 to 18 months with those different things. And from there, you can actually assess jobs and hiring plans, how campaigns can be improved, workflows, and which problems you can better solve. So once you do the AI impact assessments, you get a far greater picture. And I really focus on the talent side in particular, because we're all trying to think: what happens if we become more efficient and we don't need as many people? What are we going to do? How do we retain those people and upskill them to remain relevant in the organization? You have to be honest with yourself, and the only way to do that is to be proactive about looking at where the technology is going and how it's going to start to affect us. So one real tangible example would be future-proofing hires. If you have a job description for, say, in our world, a customer success manager, which is something we're actively looking at hiring, you put it into Jobs GPT and say, okay, assess this over the next 12 months. How is this job going to change?
You may realize, man, 30% of what this person would do today they're probably not going to be doing in 12 months, because an AI agent is going to do it. What would we do with that 30% capacity? How would we make sure this is still a full-time hire? So future-proofing hires is one really critical way, and you can't do that until you understand what the models are going to be capable of in the next six to 12 months. It's hard because it's an inexact science, and there are very few benchmarks to look at of people actually doing this.
55:05
Is there value in, like, us putting our jobs through Jobs GPT on a regular basis to see what opportunities we're missing that we're not thinking of, or just so we can stay one step ahead?
57:16
Yeah, so there were two main use cases when I built Jobs GPT. One was to prioritize use cases for yourself: I'm a chief marketing officer, how can I be using AI today? It'll help you do that; it breaks the job into tasks and then recommends ways you can use AI. And the second piece was looking at the impact as AI becomes more capable: how is it going to change my job as a chief marketing officer? So yeah, I think having regular check-ins could be part of an AI council's domain, where maybe they're taking the top 10 jobs in an organization and every three to six months running an updated assessment of how AI is impacting those roles.
57:27
Yeah, just seeing if that exposure grows.
58:08
Yeah, for sure.
58:10
Yeah.
58:11
All right, kudos to the listeners. These are great questions. Claire, I know, curates a lot of this stuff and makes sure we address different questions each time. But these are all really good. I'm impressed.
58:13
Yeah. So the next Scaling AI: The Essential Steps class is November 14th. So we are done with intro and scaling before MAICON. We're going to have a little bit of a breather before then.
58:25
Well, eventually we do the AI for Agencies Summit.
58:34
The AI for Agencies Summit is November 20th, so if you'd like more information, we'll drop that link in the show notes.
58:39
Also, I was totally thinking the other day, it's like, oh, okay, once we get through MAICON, because my whole year has been, once we get through the AI Academy launch, I can breathe again.
58:43
Story mode.
58:51
And then we did, and it's like, wait a second, MAICON's in three months. Okay, once we get through MAICON, I can take a couple days off and breathe again. And then I see that meeting on my schedule for the AI for Agencies Summit, and I'm like, oh my gosh, what do we do? Oh, I can't wait for the holidays. All right, well, thanks, Kathy.
58:52
93 days till New Year's.
59:08
I didn't need to know that. All right, well, thanks, Kathy, for co-hosting with me. Thanks, Claire, for helping put this all together. And thanks, everyone, for attending the classes, asking great questions, and tuning in for another edition of AI Answers. And thanks to Google Cloud again for partnering with us on this series. And that is all. We'll be back with our regular weekly episode, 172, next Tuesday.
59:12
Thanks everyone.
59:36
Thanks for listening to AI Answers. To keep learning, visit SmarterX.AI, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
59:38