Embracing Digital Transformation

#334 The Dirty Secret of Public DNI: Your Data is in High Demand

34 min
Mar 17, 2026
Summary

John Byron Handy IV, CEO of ITERNAL, discusses the privacy risks of public generative AI and why enterprises are locking down access to these tools. The episode explores how private AI solutions can deliver competitive advantages through proper training and education while maintaining data security and intellectual property protection.

Insights
  • Only 2-3% of the global population currently uses public generative AI tools, with less than 1% paying for advanced features, indicating massive untapped adoption potential
  • The real competitive advantage in AI deployment is not the technology itself (10%) but infrastructure (20%) and human literacy/training (70%)
  • Open-source AI models have closed the performance gap to 90-98% parity with tier-one commercial models, making private AI viable for most business use cases
  • Data privacy concerns are shifting from individual deepfake risks to enterprise-level intellectual property protection and third-party data exploitation
  • Organizations can achieve AI parity with public solutions through focused training programs and on-premise deployments, even with slightly less advanced models
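The on-premise deployments described above typically expose an OpenAI-compatible chat endpoint (local runtimes such as Ollama or llama.cpp's server do this), so client code barely changes when a public provider is swapped for a private model. Below is a minimal sketch of that pattern; the endpoint URL, port, and model name are illustrative assumptions, not details from the episode.

```python
import json
import urllib.request

# Assumption: a locally hosted, OpenAI-compatible endpoint. Ollama's default
# port is shown here; any on-prem gateway exposing the same API would work.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"


def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completion payload for a private model.

    Because the payload format matches the public APIs, the same client code
    runs whether the model lives in the cloud or entirely on-premise.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are an internal assistant. Data never leaves this network."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }


def ask_private_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local endpoint (requires a running local server)."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Nothing in the request leaves the local network, which is the data-sovereignty point the episode makes: the privacy win comes from where the endpoint runs, not from a different programming model.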
Trends
  • Enterprise AI lockdown: Companies restricting public generative AI access to protect intellectual property and trade secrets
  • Open-source model convergence: Rapid performance improvements in open-source models reducing dependency on proprietary commercial AI
  • AI literacy as competitive moat: Organizations investing in employee training and prompt engineering as primary differentiator over technology choice
  • Private/on-premise AI adoption: Growing demand for enterprise-grade, offline AI solutions that maintain data sovereignty
  • Voice cloning fraud escalation: Real-world exploitation of AI-generated voice technology for financial scams and social engineering
  • Data as liability: Shift in perception of data sharing with AI companies from innovation enabler to security and competitive risk
  • Specialized AI models over generalized: Focus on narrow, task-specific AI models delivering better ROI than general-purpose solutions
  • Regulatory and compliance-driven AI strategy: Enterprise AI decisions increasingly driven by privacy, compliance, and data residency requirements
Topics
  • Private Generative AI Deployment
  • Enterprise Data Privacy and Security
  • AI Model Performance Parity
  • AI Literacy and Employee Training
  • Intellectual Property Protection in AI Era
  • Open-Source vs. Commercial AI Models
  • Voice Cloning and Deepfake Fraud
  • On-Premise AI Infrastructure
  • Generative AI Adoption Barriers
  • AI-Generated Video Technology
  • Data Exploitation by AI Companies
  • Prompt Engineering Best Practices
  • Enterprise AI Strategy
  • Competitive Advantage Through AI
  • AI Governance and Compliance
Companies
OpenAI
Discussed as primary public generative AI provider; ChatGPT launch (Nov 30, 2022) marked mainstream AI adoption moment
ITERNAL
John Byron Handy IV's company; provides enterprise-secure AI platform for video generation and private generative AI ...
Anthropic
Tier-one AI model provider; Opus 4.6 model cited as cutting-edge benchmark for comparing open-source model performance
Google
Discussed as public AI provider (Gemini) and historical example of data monetization through advertising and user pro...
Meta
Praised for open-sourcing AI models, contrasted with other tier-one providers keeping models proprietary
Facebook
Referenced as example of 'if you're not paying for the product, you are the product' data exploitation model
Target
Historical case study of data analytics predicting pregnancy before customer awareness through buying pattern analysis
Tesla
Elon Musk's company referenced in context of open-source philosophy and competitive advantage strategy
Intel
Mentioned as host's former employer and context for early computing/workstation experience
Adobe
Software mentioned for advanced visual effects work (After Effects) used in deepfake and video production
Autodesk Maya
3D rendering software referenced for advanced production work and AI video generation techniques
People
John Byron Handy IV
CEO of ITERNAL; filmmaker and entrepreneur discussing private AI solutions and enterprise data security challenges
Dr. Daren
Host of Embracing Digital Transformation; Chief Enterprise Architect conducting interview on generative AI privacy risks
Elon Musk
Referenced for open-source philosophy and competitive advantage strategy; net worth cited as $800 billion
Quotes
"I think that the privacy and the data security now more so than ever is essential because what we're also finding is that a lot of these large model development companies are very hungry for the data."
John Byron Handy IV, opening segment
"If you're not paying for the product, then you are the product."
John Byron Handy IV, mid-episode
"The AI strategy blueprint, it's a 10-20-70 problem. The algorithms, the technology, that's 10% of the equation. 20% of the equation is that AI infrastructure. But the most important bit, the 70%, that's the human element."
John Byron Handy IV, late episode
"Just because you want something to be private doesn't mean it's a bad thing. It just means that it's between you and your family or your loved ones or people that are not the general public."
John Byron Handy IV, mid-episode
"Most people, if they could just save an hour a day writing emails faster or writing their proposals faster, that'd be a game changer for the business and for their personal lives."
John Byron Handy IV, late episode
Full Transcript
But I think that the privacy and the data security now more so than ever is essential because what we're also finding is that a lot of these large model development companies are very hungry for the data. Welcome to Embracing Digital Transformation, where we explore how people, process, policy, and technology drive effective change. This is Dr. Daren, chief enterprise architect, educator, author, and most importantly your host. On this episode, the dirty secret of public gen AI: it wants your data, with John Byron Handy IV, filmmaker, entrepreneur, and CEO of ITERNAL. John, welcome to the show. Thank you for having me, Daren. Hey, we've talked a couple times. I've taken a look at a lot of the things that you do. Good stuff, really good stuff. You're right at the forefront of this gen AI revolution that we're seeing. I'm not calling it an evolution. I think, no, this is more of a revolution. It's changing everything, not gradually. It's happening so fast. People's heads are snapping back. Before we dive into all of that, everyone that listens to my show knows that I only have superheroes on the show, and every superhero has a background story. John, what's your background story? What's your secret identity? Well, would you believe if I told you that I went to film school? Oh, really? Yes, this is going to be a fun background story for you, Daren, and everybody that's listening, too. I started making movies when I was six years old. I fell in love with it. Throughout my journey in school years, I had some phenomenal teachers that allowed me to pursue that passion. What ended up happening is, as I went through third grade and fourth grade, a lot of the school projects that I was able to submit instead of writing essays were actually in video form. I had gone through this journey over many years to the point where, when I was in high school, I thought I wanted to be a Hollywood film director making epic action movies because I loved the explosions.
I loved creating things. Of course. Who doesn't want to do that? That would be awesome. That was my passion. I was big into martial arts. I studied martial arts for about 20 years, third degree black belt and that. A lot of fight scenes and things like that. What I realized is it was a very expensive passion. Making these movies was, at the time, very costly. The camera equipment, the set decorations, the props, the locations, it cost a lot of money. I had to figure out a way to make the money to fund my passion. And so I ended up starting what became the top corporate film production company here in Austin. I did that for about a decade. We were making the corporate videos, advertisements, the things that you see on people's websites. But that paid for the cool stuff. All right. So one thing about that before you continue on, no criticizing any of my production quality of this podcast. Just to set it right there. Okay. Because this is Daren. It's all about the content. Yeah. Okay. There you go. Content. Things that are learned. Yeah. So, you know, especially nowadays, right? It's so cool to see the ability where even with an iPhone, right? You can have an iPhone and a microphone. If you have a good story, that's what's going to capture people's interest. And I think that you've done a phenomenal job of showcasing what's in the industry, what's happening, and that's what matters. But I think your production quality is pretty solid too. Okay. Well, thank you. Thank you. So, all right. So how do you move from running a corporate production company for, you know, training film? You probably did training films and commercials and all that stuff, right? Yeah, I did. So I was doing all of that to fund the passion side, which was making epic cinematic music videos. So not like where people are playing instruments, but like, you know, not that I'm a fan of the music, but like, you know, a Taylor Swift or Kanye West or Pick Your Big Artist, right?
The kind of productions that they were doing. But they did great. That was the stuff that I like to do. Oh, very cool. It was cinematic, right? And so I was doing more and more of the corporate work to fund that. I won a bunch of awards. But what ended up happening is, you know, my brain was constantly looking for ways to optimize the production efficiency. And I did an analysis on the corporate work that we were doing. And what I found was that 60% of a production, on average, it would be about 10 hours of work to do one of these productions and for a three minute video. And I got paid very good money for that. It's funded my expensive hobby, but, you know, six hours was logistics, 60%. So that was packing the vehicle, you know, packing the camera equipment, driving to set, setting it up, tearing it down, packing back in the car, taking it home, unpacking. 60% of the time was that because the executives that I was filming, it had to be perfect, right? And it's not like we could forget the microphone or that one cable. Right, right. Yeah, their time is valuable, right? Yeah. We had backups, redundancies, and we delivered. And we never had a shoot go bad in the 10 years that I was doing it. I was just, you know, I was very proud of me and my team for that. But what I found was that if we could optimize that logistical capacity, then our, you know, cost per hour or our fee per hour would dramatically increase. And this was around 2017 timeframe when AI deepfakes were just becoming a thing. And, you know, I was self taught Adobe After Effects. I was doing, you know, this advanced stuff, 3D rendering and Maya and all these applications as a young kid. And so I was very familiar with like what went into that. I was building my own computers. My first ever workstation was on Intel. Of course, it should be. Contributions there. Yeah. But, you know, through that journey, I saw that the deep fake technology is becoming something eventually, right? 
Not at the time, but when it first came out, it was like, wow, this is this, this has potential. And so what I realized is this concept of filming somebody in person at some point in the future would go away. Right. Instead of having to sit down in front of a camera and record a physical executive, you could have an AI generated version of that. And that would be really cool. And I didn't quite know when that was going to happen. But I saw the future. And I thought, well, that's cool. But what really needs to happen for that to be widely adopted is you need a platform that can support the security, the privacy and the compliance associated with recreating some, you know, important executive's presence, right? And we did a lot of work with public companies. And so, you know, naturally, if you're recreating the CEO of one of those companies, if that likeness got out into the wrong hands, that could be a huge risk, right? Could tank the price of the stock just because of false information. And so I said, if we could wrap that, that very cool technology in something that would be enterprise secure and scalable, that'd be a really cool business. And so that's when I launched ITERNAL Technologies. And we started out as being the platform for what eventually would become AI generated video. And, you know, now fast forward, gosh, almost, you know, nine years later, since that deep fake moment, we're actually at the point where we're starting to get super believable, generative AI video content. And it's just making the younger version of me so excited to see how much it's transformed. But over the years, you know, we've been doing this for a long time now, seven and a half years ago, I started the business. So we branched off into a lot of different areas doing a lot of different things, listening to our customers. And it's been an amazing, exciting journey. And I'm sure we'll touch on that a little bit later. But that's, that's how we got here. That is amazing.
That is amazing, right? I mean, you're the first film production guy I've had on. I have had actors on, I've had opera singers on, I've had people climb Mount Everest, and even run on seven continents, seven marathons in seven days. So, but yours is unique. You were right, John. You said it was going to be unique, it is unique. It's an interesting journey because you come from a very artistic, but you're a different kind of artist. You're an artist that is about production, it's about getting things done more than it sounds like than the art of it, right? You kind of, because you're a technologist too, I can see that. So you're a unique person that way. I think that's pretty cool. So my question comes in, you came from, I want to protect the privacy of my executives, because that's your customer base, to where we're at today with Gen AI just running wild, crazy everywhere. Are those same concerns still there, especially now with public Gen AI so widely available? Do you have the same concerns with data privacy as you did before, or has it shifted and changed? That's a great question. I think it's shifted in a good way, but also in a way that there's, there is an article and a graphic that was put out this last week showing AI adoption across the world population. And the key takeaway is that only a fraction of humans in the world use a Gen AI software today, like a ChatGPT, a Claude, a Grok, a Google Gemini. It's like somewhere around 3%, 200-ish million people, give or take. Maybe it's a little bit more than that, but generally speaking, it's a very small percentage. Pretty small. And of those, the number of people that are doing what we would call advanced generative AI is less than 1%. That's somebody that's paying for one of these tools, not just using the free ones. It is such a small subset. Now, that's just humans, right? That's not looking at businesses and what's going on there, but it's still very, very small.
And so while the enterprise is still, I would say, in the population in general, still very behind, right? There's a lot of news, there's a lot of excitement, a lot of buzz, a huge amount of potential. But like the general population, a lot of them still haven't touched it or they're skeptical about it. There are a variety of different barriers. But what has been fascinating to see is the level of awareness around, hey, this is an AI-generated video clip. It is actually quite high. There are a lot of people that are probably on Facebook that will see it and they'll think that it's a real clip, but it's also very believable. But there's also a large and growing number of people that were already skeptical about believing what they're seeing. And now, I think that you're seeing even more general awareness that there are AI videos of presidential candidates. There are AI videos of animals. There are AI videos of movie stars, right? All these different things and people are kind of understanding that that exists now. And so I think that there's a level of skepticism just over what we've seen transpire over the last like five years in regards to social media, the algorithms, things like that, that have given people an awareness that they didn't have when this deep fake stuff came out. So I was really scared about where it would go. And that's one of the reasons I built this platform. But I think that we're starting to see the public become more engaged and aware. And that element of it is very encouraging to me. But I think that the privacy and the data security now more so than ever is essential. Because what we're also finding is that a lot of these large model development companies are very hungry for the data. Yeah, yeah, they are. Right? So that's a big deal. So why for the good of humanity, why doesn't everyone just hand over all their data to these big public gen AI? 
Well, I think, I mean, let's talk a little bit about that because OpenAI would not have made the inroads that they did if it wasn't for all the data they gained access to through their academic pursuits. Right? I mean, it was very well documented. We never would have had the November 30th, 2022 moment when ChatGPT launched and the world understood what Gen AI could really do. So why not give it even more data to help humanity along get even a better Gen AI? That's an argument I hear them say. Right? So why, I mean, why is that argument falling on deaf ears? Because I'm seeing a lot of enterprises right now, they're locking everything down. They're like, no one can access public Gen AI from inside the walls of the company. Well, I think that there are a couple of facets to that. Some are uniquely human. Why don't we just open source every trade secret in the world right now? Right? Well, that's what Elon says to do, right? Elon says, he does. Actually, it's working out pretty good for him, it's working out great for him. He's worth $800 billion now. I mean, but it's one of those things where it's like, well, if you don't believe in competitive advantage, then why aren't your models open source? You know, you're right. The big boys don't open source their models. It was Meta that open sourced all their models, which was brilliant. By the way, I think that has improved everything across the board. But the other guys haven't really open sourced their full models yet. Yeah. And the interesting thing is they haven't even open sourced the older ones. No, no. And the one open source model that OpenAI put out is crippled and has a lot of issues. And it was like pretty cheaply assembled based on the information and the deep dives that I've read around people who actually pulled the model apart and looked at kind of what that training data set was. They reverse engineered the training data set, among other things.
So I think that going back to your question, though, there's information that makes us unique. And we're in an interesting time in that the common phrase, if you're not paying for the product, then you are the product. That was the classic one with Facebook. That was the classic one with Google. Why is this? Google, yeah. It's because of the data and the advertising that they can sell back to you by knowing everything about you. There's a fascinating document that was put out, it was described in a book later on, whereby Target knew if a woman was pregnant, before the woman knew that she was pregnant. This was back in 2013. I remember here. I remember reading this report. Yeah. It was based on their buying patterns and they had enough data to notify or not notify, but sell to that person in advance of them actually even knowing that they were pregnant. And so things like that are kind of scary. It's fascinating, but these are patterns. But I think that there's something that's special and unique about the concept of privacy. And just because you want something to be private doesn't mean it's a bad thing. It just means that it's between you and your family or your loved ones or people that are not the general public. And that could be an innovation. That could be an idea. That could be a fear. That could be a piece of exciting news. And to hand all of that over to any company immediately gives them control of you piece by piece. And it doesn't matter what the ethics or the morals of the company is, because whether it's that company doing something bad with the data or individuality in the world, but if you have a bad actor and that bad actor somehow gets access to that data, now you're compromised even though the company that you originally trusted was pure and good and true. And now that opens you up to exploitation, manipulation, fraud, abuse, blackmail. Right? And not even for something bad. It could just be some personal thing. 
Oh, we discovered that Daren, you're worried about some friend that is going through a bad time. And then you have a bad actor that clones that friend's voice. That's happening today. There's phone fraud now where it clones the voice and they call you and say, Daren, I'm in a bad way. I'm stuck. I need you to send me $5,000. And you love the person, they're the closest person to you. Of course, yeah, they're good for it. They'll pay you back. You just got to take care of them right now. And it's fraud. Right? And that's all because that data left your control. Now, it's not quite the same individual versus company. But that same kind of thing can be applied to the company. Right? If you're in a competitive market, your data, your intellectual property, the innovations that you have are what keep you alive. And if you're turning that over to a third party, there's risk. So that explains a lot why we're starting to see enterprises kind of block all of that. But at the same time, they're now not able to take advantage of all of that great power that these Gen AIs have. Right? Because it's amazing what they can do. It truly is. So I don't want to leave that on the table. So how much am I willing to risk to do that is in the equations. And I think at first, people are like, I'm willing to risk a lot, but they're not seeing the return on investment. So now they've locked everything down and said, it's not worth it. Sleepy cat videos or dancing cat videos just aren't worth my intellectual property going out the door. I think that's what's happening. So what alternatives do they have? This is where we need to talk about private Gen AI. It's always been a big thing for me that, hey, we need something out there. But a model by itself is what we're finding is not sufficient because the private or the public Gen AIs have moved beyond just hosting a model.
They have really complex workflows and a whole bunch of really cool things that are going on on the back end that give you even better responses. How in the world are we going to keep up with that? That's a great open-ended question there, Daren. That's a softball. That's called a softball, John. There you go. So I think that it's a couplefold. So again, only a small percentage of the population today is able to use these AI solutions. 200 million people give or take is around 2% to 3% of all people. 10% is 800 million for context. So with that in mind, a lot of people still don't know how to use the tools, even the ones that are using them, those 200 million, right? Most think that AI is like a Google search because we were trained by Google for 20, 30 years. You type in your keywords, you press enter, and then you read through the results and you filter and you pick out the link that you think is good versus bad. Right. And that's not a wrong way of doing it, but what it did is this concept of AI turned that on its head. And now instead of putting in two or three keywords or five keywords like you were trained to do for 20 years, it actually turned into something a whole lot more human, which is you describe what you're looking for like in detail. You say, oh, I'm looking for, I was actually doing an AI training earlier today and I gave this exact example and we actually went to Grok and we typed it in because it was public research. The idea was there was this cool study that came out about reducing like by 50% the plaque in the brain for Alzheimer's patients through sound waves. And if you administer a 40 hertz frequency, you can like break up this plaque and some other things. And so I'd seen like a couple of posts on X about this.
And I thought that would be a great example of how to show these people I was training how you can use something like one of these deep research AIs to go off and crawl thousands of sources and find the information and return something back. So when I described this and I typed it in live for them on the call, right, I didn't go 40 hertz brain Alzheimer's go, right? I said, find me the research paper that talked about this application of 40 hertz sound frequencies to the brain that reduced the plaque for Alzheimer's patients. And then I followed that up by saying, then find me the link to the original study. And in addition to that, describe how the sound waves were or how the frequency was emitted and distributed. Was it through sound? Was it through like actual vibration? Or was it like, you know, speakers that were like, you know, blasting sound waves, right? And so I described that and a couple of other things. And then I pressed go, you wait like 60 seconds, it pulled through like 300 sources, and it came back with something. And it was just one of those aha moments for people where it's like, wow, I can actually be really specific. I can tell you exactly what I want. And it'll go off and it'll find the answer, versus a Google search where I would have had to spend, you know, 20, 30 minutes reading through that first page. And then to your point, hours, if I have to go to page two or page three, right? And so, you know, the first thing, whether you're a public AI or private AI, knowing how to talk to the AI is so important. And I actually just wrote a new book on this topic and how businesses can enable themselves and their teams to become more AI literate. Because what we're seeing is these AI technologies are moving so fast that if an organization cannot go people first and train their people, they can have the coolest technology in the world, right? But if they can't use it, then there's not going to be that value.
So to bring it all home now, in regards to how the private stuff keeps up with the public stuff, the distinct organizational advantage that a company has over an individual user is they have the potential for leadership, training, education and literacy that an individual person may not have unless they're self motivated. Meaning an organization, a company, can roll out a Gen AI training program to all of their employees and give them access to the skills. And it's just little nuggets of knowledge like that, like don't type five keywords into your AI and expect good results, put in a paragraph and things like that that can suddenly elevate and maximize so that even if the latest innovative cloud technology is out there and doing great things, you can still have a competitive advantage across your employee base with something secure and local by combining really good knowledge and know how about how to use these AIs through that training and education. And you pair it with working with collaborative partners, whether it's a software vendor that provides those on-prem AI technologies to mimic the cloud. We have some like that that are completely offline, on-prem, secured, locked down that do similar things like a ChatGPT or a Copilot. But beyond that, you can have the best tool, but if you don't know how to use it, it's not going to matter. So I like what you said there because it's not the tool by itself, it's the application of the tool, which requires training, it requires rethinking about how am I going to approach business problems differently because now I've got another tool that's there. Just like we had to shift when the internet became widely available for business before that. And you don't even know this because you're so young, John. But before that, they used to carry like memos around. Yeah, that part. Yeah, you had the mail room, that was a real thing. And it still was, even up until COVID hit, I still had a mail stop at Intel.
I still, every once in a while, I get an email that told me, hey, someone sent you real mail. But I've seen a major shift in that. So this is another major shift. We've got to rethink the way that we do business. And private gen AI can give me that privacy that I need. So I'm not relying on a third party for some intellectual property that I want to keep in house. I understand that. What about the argument that people have been saying, can it be as effective running private gen AI as the public ones? Is it at parity or is it so far off that it doesn't matter? I think there's a middle ground there. So open source is, at least right now, still somewhat slightly behind what the tier one model providers are doing. But it's open source. What do you expect? That gap has shrunk significantly, just even in the last couple of months. The new models that have come out, as of recording this, we're midway, almost towards the end of February, 2026. The two big models that have come out, GLM5, which is out of China, and Kimi 2.5, which is also out of China, which is a whole other problem. Those two models are pretty much almost completely at parity with Anthropic's Opus 4.6 model, which is the latest cutting edge model. Now, the operating theory is that it was actually spies that have infiltrated and pulled the technology out. But regardless, the fact is, these models are now open source and they're available to the mass public, whether you're an individual that wants to run them for yourself on your technology or you're running it as a company. There are models that exist that are, say, worst case scenario, 90% of what the tier ones can do. Best case scenario, it's like 98% of what the tier ones can do. But even at 90%, it might be, or even maybe that 50%, it might be valuable enough. 100%. And that's exactly where I was going with it. Because how often do I need a deep research to return things in Shakespearean sonnets? Never. But the big models can do that sort of stuff.
They can do so, because they're so generalized, they can do so much stuff. But I think we can focus these private gen AIs onto specific problems and have them focus and do an incredible job at a lot lower cost. Absolutely. Yeah. Okay. So that's reality. You think that's reality what I just said? Yeah, for sure. Because, and again, keep in mind, what I'm speaking to is the top tier: you want to do AI generated code to build applications while doing zero software development, which exists, by the way, now. I'm not a technical software developer by trade, but I've spent seven and a half years leading a team of AI developers, and I can speak product development to the point where I can prompt the AI and build a very secure, very scalable product in days. That's amazing. Yeah. It's ridiculous. But to your point, most people aren't going to go that far down the rabbit hole, nor do they need to. And so the fact is, if open source can touch that high level of quality, then they will absolutely be able to do those general purpose business tasks that you're talking about. And what it really becomes is, I write this in my book, the AI strategy blueprint, it's a 10-20-70 problem. And people typically think, well, it's 100% technology. Well, it's not, right? The algorithms, the technology, that's 10% of the equation. 20% of the equation is that AI infrastructure: do you have the right hardware, do you have the right compute, can you run the thing? But the most important bit, the 70%, that's the human element. I love it. And it's do the humans that are using these AI, whether it's a private gen AI, or any AI for that matter, do they know how to effectively communicate and prompt to get the response that you're looking for. And even the models from a year and two years ago, for most general purpose business tasks, like writing an email, drafting a cover letter, they work, like it's here, it's not a compute problem anymore, it's not a model problem anymore.
It's simply an AI literacy and education problem for the majority of tasks. And yes, there's always going to be that 1% that are going to need the best cutting edge to push the boundary, build, you know, a full stack product in two days versus two years. Literally, that's what we're doing now internally. But that's like such a small percentage of the general workforce, right? Most people, if they could just save, you know, an hour a day writing emails faster or writing their proposals faster, that'd be a game changer for the business and for their personal lives. So I think there's a lot of opportunity. So, so John, we're out of time. So if people want to find out more about all this, where do they go? And how do they get in contact with you? Well, first thing to do, I'm on LinkedIn. So, you know, feel free to reach out to me on LinkedIn. But we've got a lot of great resources on our website, which is iternal.ai. It's I-T-E-R-N-A-L.ai. You can find out all about things we're doing and the stuff we're talking about. That's awesome, John. Hey, thanks for coming on the show. And we'll have you back on because we've got more to talk about. Thanks, Daren. Really appreciate it. Thanks for listening to Embracing Digital Transformation. If you enjoyed today's conversation, give us five stars on your favorite podcasting app or on YouTube. It really helps others discover the show. If you want to go deeper, join our exclusive community at patreon.com slash Embracing Digital, where we share bonus content and you can always connect with other change makers like yourself. You can always find more resources at embracingdigital.org. Until next time, keep embracing the digital transformation.