Everyday AI Podcast – An AI and ChatGPT Podcast

Ep 747: Responsible AI Playbook: What It Means and 5 Moves to Ensure Your AI Strategy Survives (Start Here Series Vol 17)

27 min
Apr 2, 2026
Summary

This episode outlines a five-move responsible AI playbook to help companies build trust and scale AI safely. It explains how responsible AI differs from ethical AI and governance, identifies the trust crisis and regulatory risks companies face, and provides actionable steps to audit systems, assign accountability, test for bias, implement expert oversight, and use transparency as a competitive advantage.

Insights
  • Responsible AI is the operational framework that turns ethical principles into practice—it's the 'how' after ethics answers 'should we'
  • Only 30% of organizations have reached mature AI governance, leaving most companies vulnerable to lawsuits, regulation, and consumer distrust
  • Companies investing heavily in responsible AI report over 5% profit impact, proving it's a business accelerator, not just a compliance checkbox
  • The EU AI Act enforcement begins August 2026 with fines up to 35M euros or 7% of global revenue for non-compliance on high-risk AI
  • Transparency about AI use is becoming a competitive advantage as consumer distrust rises—brands that prove authenticity will win market share
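The EU AI Act fine noted above is the greater of two amounts, a flat cap or a revenue percentage. A minimal sketch of that exposure math (the revenue figures in the example are hypothetical):

```python
def eu_ai_act_max_fine(global_revenue_eur: float) -> float:
    """Worst-case fine for non-compliance on high-risk AI under the EU AI Act:
    the greater of a 35M EUR flat cap or 7% of global annual revenue."""
    return max(35_000_000.0, 0.07 * global_revenue_eur)

# For a hypothetical company with 2B EUR global revenue, the 7% side dominates:
print(round(eu_ai_act_max_fine(2_000_000_000)))  # 140000000
# For a hypothetical 100M EUR company, the flat 35M EUR cap dominates:
print(round(eu_ai_act_max_fine(100_000_000)))    # 35000000
```

This is why the episode notes the fine "got your attention" for large companies: past roughly 500M EUR in global revenue, the 7% figure exceeds the flat cap.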
Trends
  • Consumer trust crisis: 50% of consumers now question authenticity of almost everything online, with this segment growing rapidly
  • Regulatory convergence: State-level AI hiring laws (CA, IL, NY, CO) are proliferating ahead of federal action; EU AI Act enforcement imminent
  • Litigation shift: Courts rejecting 'the algorithm did it' defense; companies now liable for AI discrimination in hiring, lending, healthcare
  • IP exposure for enterprises: Copyright lawsuits (e.g., Anthropic $1.5B settlement) creating liability for vendors using unlicensed training data
  • Expert-driven oversight replacing checkbox compliance: Leading companies treating responsible AI as strategic priority, not governance burden
  • Agentic AI governance gap: Shift from read-only LLMs to proactive AI agents creating new accountability and privacy risks by mid-2026
  • Transparency as brand differentiation: Parallels to 'clean label' food movement; companies disclosing AI use gaining customer loyalty
  • Governance maturity as scaling bottleneck: Responsible AI frameworks becoming prerequisite to move beyond pilot stage at enterprise level
Topics
  • Responsible AI Definition and Framework
  • AI Ethics vs Responsible AI vs AI Governance
  • Algorithm Bias Detection and Mitigation
  • AI Transparency and Explainability
  • Human Accountability in AI Systems
  • Data Privacy and Security in AI
  • AI Safety and Reliability
  • EU AI Act Compliance and Enforcement
  • AI-Related Litigation and Legal Risk
  • State-Level AI Regulation (Hiring)
  • Copyright and IP Liability in AI
  • Consumer Trust and Synthetic Media
  • Expert-Driven AI Oversight
  • AI Governance Maturity Models
  • Agentic AI Risk Management
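Several of these topics (the audit step, risk classification, and the accountability "10-second test") can be combined into a simple system inventory. A minimal sketch, assuming hypothetical system names and owners; the risk tiers follow the EU AI Act's simplified categories:

```python
from dataclasses import dataclass

# EU AI Act risk tiers (simplified), ordered highest risk first.
TIERS = ["unacceptable", "high", "limited", "minimal"]

@dataclass
class AISystem:
    name: str       # hypothetical system identifier
    use_case: str
    risk_tier: str  # one of TIERS
    owner: str      # the one accountable human (the "10-second test")

registry = [
    AISystem("resume-screener", "hiring", "high", "jane.doe"),       # hypothetical
    AISystem("email-drafter", "internal comms", "minimal", "j.lee"), # hypothetical
]

# Surface the highest-risk systems first for audit:
audit_queue = sorted(registry, key=lambda s: TIERS.index(s.risk_tier))
print([s.name for s in audit_queue])  # ['resume-screener', 'email-drafter']
```

The point of the registry is that every entry must name exactly one accountable owner, so "who is responsible?" can be answered in seconds.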
Companies
OpenAI
Enterprise AI tool provider; host discusses whether companies using it face IP and liability exposure
Google
Offers Gemini enterprise AI tool; mentioned as vendor companies must evaluate for compliance risk
Microsoft
Provides Copilot enterprise AI tool; host questions liability exposure for companies using it
Anthropic
Paid $1.5 billion copyright settlement with authors; example of IP liability risk in AI industry
Workday
Subject of Mobley v. Workday federal lawsuit for AI hiring discrimination; landmark case rejecting algorithmic defense
McKinsey
Research cited showing companies investing in responsible AI report 5%+ profit impact vs non-adopters
People
Jordan
Host of the Everyday AI Show; guides listeners through responsible AI playbook and framework
Quotes
"Responsible AI now determines whether companies can scale AI or just if they're going to stay stuck."
Jordan, ~2:30
"Ethics essentially is the question of should we do this? And responsible AI asks, well, how do we do it right?"
Jordan, ~8:00
"The algorithm did it is not a defense anymore."
Jordan, ~15:30
"Trust is that infrastructure. The ethics provides the values. Responsible AI provides the decisions and governance enforces both."
Jordan, ~38:00
"Transparency starts at home and then it is going to expand right to your neighborhood or your potential clients."
Jordan, ~36:00
Full Transcript
This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. Right now, half of all consumers question the authenticity of almost everything they see online, and that segment is growing fast. And it's not just some of the things they see online, it's almost everything. And this is the world that your company is deploying AI into. And most companies have no system in place to prove their AI outputs are real, audited, or accountable. Actually, studies say that only 30% of organizations have even reached a mature level of AI governance. So what's all that mean? Well, the majority of companies are shoving AI down their employees' and their consumers' throats, but they have no way to verify almost anything that they create. All that AI text, the AI graphics, the AI apps: is it real? And for consumers, that's why it's turned into a trust problem. But for companies, it's a responsible AI nightmare. And this isn't a future issue. This is a right now issue. And today, we're going to give you the responsible AI playbook to actually deal with it on Everyday AI. Welcome. What's going on? Here is the big picture for today's show, as we tackle responsible AI and give you the playbook, tell you what it means, and the five moves to ensure your strategy survives. So here's the big picture. Well, responsible AI now determines whether companies can scale AI or just if they're going to stay stuck. And we're at kind of a point now where trust, lawsuits, regulation, and consumer confidence are just shaping AI's real business value. And most companies know the risks that exist, but they don't have systems in place to deal with them. And it's almost like everyone's just kind of ignoring them. Well, that's why many companies are treating responsible AI as, well, just a checkbox that they may or may not check.
But those that do, and that are treating it properly with the respect and time that it deserves, well, they're pulling ahead right now. So on today's show, stick around for the next 20 minutes, and here's what you're going to learn. You're going to learn the real reason most companies can't get past the AI pilot stage. We're going to see why 2026 became the year that AI trust, billion-dollar lawsuits, and regulation collided. And I'm going to give you at the end a five-move responsible AI playbook that separates the leaders from everyone else. Let us get into it. This is the Start Here Series on Everyday AI. So if you're new here after 700-plus episodes, everyone's asking, Jordan, where do I start on this journey? You have so many podcasts. Well, you start with the Start Here Series. Preferably, if you can, do it in order. This is now, what are we on, volume 17? The Start Here Series is the essential podcast series to both learn the basics and to double down on your AI knowledge. So if you haven't already, please go to starthereseries.com. That's going to give you free access to our exclusive inner circle community. You literally can't get access right now any other way. And then in the Start Here Series space there inside our community, you can go listen to and watch every single Start Here Series episode. They're fairly quick. It's all in there. And then you can connect with other business leaders who are trying to do the same. So if you missed last episode, that was volume 16 of our series. We went from chatbots to super agents and talked about the 11 AI categories explained. So make sure you go check that one out. But today we're talking about the responsible AI playbook, what it means, and five moves to ensure your AI strategy survives. If you are really new here, this isn't an AI version of me. Or if you listen all the time: I just have allergies now, right? I was sick for months and now I sound like this. So yeah.
Anyways, before we get into responsible AI, I want to talk a little bit on some definitions, because it can actually get confusing. A lot of times people mistake or kind of cross over between ethical AI and responsible AI. So today we're going to be tackling responsible AI, which is different than ethical AI, because I don't think there's really a blanket kind of statement that you can apply to ethical AI. Well, ethical AI defines the moral principles of what is right: fairness, safety, and privacy. And depending on, well, what country you're in, that's a big one. We have listeners from all over the world. But also, a lot of, well, if C-suites are being honest, they look at AI as a headcount thing, right? They're like, okay, well, if we deploy AI everywhere, we can reduce headcount by 30%. And for them, maybe that's an ethical decision, even though I would argue it's not. So we're not going to really tackle ethical AI today, but we're going to be talking about responsible AI. And that's the operational framework underneath ethical AI that puts those ethical principles into practice. And whatever those ethical principles are, responsible AI is kind of the framework that you put into place to make sure that those principles work. So ethics essentially is the question of, should we do this? And responsible AI asks, well, how do we do it right? And then if you're like, wait, did you already cover this? No, we covered AI governance. So make sure to go check out volume 14 of the series; that was episode 737. Essentially, ethics gives the values, tells you what's fair, safe, and aligned when it comes to AI. And then responsible AI, which we're going to be tackling today, that's how you turn those values into real decisions about use cases, oversight, and data. And then governance, which we already covered, that provides the roles, rules, and review systems that enforce responsible AI over time.
So make sure after you listen to this episode, if you haven't already, make sure to go listen to volume 14. That's episode 737 on governance. So we're technically zooming out here, because we already gave you the governance playbook and the five governance rules, and this episode kind of explains why those rules exist. All right, we're going to start with five, and then we're going to end with five. So first, just to get this framework really in place, I want to first go over the five core pillars that hold up every responsible AI decision. That's fairness, transparency or explainability, accountability, privacy or security, and then safety or reliability. So number one, fairness. That's how your company actively identifies and mitigates any algorithmic bias for equitable outcomes, right? We have to understand that most large language models that we use are biased, and we have to start tackling that issue. What systems do you have in place to make sure that your outcomes, whatever those are, whether they're text-based outcomes that you put on your website, or maybe they play into your hiring practices, whatever it is, you need to understand that by default, AI is extremely biased. So how can you make sure that you are putting those systems in place to make sure that AI is equitable and fair? Number two, transparency or explainability. You need to understand how and why AI reaches its decisions. So if you've already listened to the Start Here Series, you already know that, because we've tackled everything from here's what a large language model is to how it works, you know, cutting through the buzzwords and jargon. So go back and listen to that one; that'll teach you. Then three, accountability. That's how you establish clear human responsibility for AI actions and outcomes.
And we did tackle that one a little bit more in governance, but essentially it's this: the 10-second test. If something goes terribly wrong, if you have an agent crash in your organization and an agent goes and, you know, does something absolutely terrible, within 10 seconds, if someone says, who's responsible for this, can you with 100% certainty identify the one human being that is ultimately responsible for that? Yes or no? You need to understand the accountability chain; it's just as important as it is, you know, to have agentic observability and traceability. You also have to have that on the accountability side with your humans. Number four, privacy or security. And that's protecting personal data from misuse and adversarial attacks. And this becomes even more important as we get into agentic AI, right? As we give AI agents the keys to the castle to make proactive decisions for us. Large language models are no longer reactive, read-only. They are now proactive, read-write. They are making decisions on our behalf, oftentimes without expert guardrails, right? You might have a catch-all safety human in the loop who may or may not have any understanding of how to properly protect personal data. And then last but not least, number five, safety and reliability. You need to ensure in your company that AI performs as intended without causing harm. All right. And you might say, okay, well, how can AI cause harm? Well, if you're only thinking of, you know, using AI like, oh, I just go to Copilot and, you know, make my email sound better, that's not what I'm talking about here. Right. We are in the year 2026. And hopefully, if you're an avid listener of the show, you are already deploying agentic societies throughout your organization. So I think at that point, you will understand why it's important to make sure that your AI is making safe decisions that don't cause humans harm, ultimately, right?
Because as we give more and more agency away, right? And that's ultimately kind of this, I think, rough 18-month transition that we're going to be going through, mid-2026 to the end of 2027: human experts giving away their agency. So you're like, okay, well, what do we do now? Well, you have to make sure that the systems that you deploy are safe and reliable. So why is responsible AI even important? Right. Well, we talked about it. It is the trust crisis. So right now, about half of consumers, according to an iProov study, question the authenticity of almost everything they encounter online. And I agree. We actually had a Start Here Series episode on this. Let me go ahead and bring it up and tell you what episode it was. So this was episode 740. It's volume 15 of the Start Here Series, called "Everything Is Fake," about how your company can leverage human expertise to fight AI workslop. So consumers, I don't think most people see or understand this yet. And as the technology gets better, right? As AI video gets better, as AI images get better, as AI audio and deepfakes, unfortunately, get better, that percentage of people questioning the authenticity is only going to rise, right? So synthetic media, deepfake fraud, all of those things just erode the baseline trust that people rely on. It used to be something like, you know, take the internet as an example. You go to Google page one, you probably have a lot of trust in that, right? If you went to page 20 and it's some random blog from some dude in a basement, you might look at that and be like, not sure about that, right? That page-20 feeling? Take that times 10, right? And that's going to be the commonplace level of distrust that we are going to see every single day. And whatever sector you're in, if you haven't started to experience it yet, it is going to slap you, right? It's going to slap you. I think it's starting with social media, right?
You're seeing more and more things, and, you know, things that happen in real life, people are like, oh, that's fake, right? And then things that didn't happen in real life, people are like, oh, that's real, right? You're starting to see it play out on social media. It is going to play out in our everyday lives. And I think even things that people start to see with their own eyes, they're going to start to second-guess those things as well, because it is going to become a default to assume that everything you see is fake or AI-generated, right? Whether it's on a screen or whether you hear, you know, something on the radio, right? Apparently I'm 90, talking about the radio. I still love the radio, by the way. But customers are already shifting toward brands that can prove their outputs are authentic, and that's what you need to understand. And that's why responsible AI is so important. And well, aside from just consumer trust, well, you've got to pay attention to the regulatory side as well, because currently there's been a lot of lawsuits, but Mobley versus Workday was certified as a collective action for AI hiring discrimination. So a federal court just said the algorithm did it is not a defense anymore. So your company, that's why you need to start paying attention to what you produce, right? So in this case, right, there's been a lot of different lawsuits. And I think the first ones that, you know, kind of rose to prominence are ones where algorithms made decisions that discriminated against people for whatever reason, right? Whether it's hiring, getting a loan, you know, health care coverage, etc. And that's not new. That's been around for technically decades, but it's becoming more and more prevalent as more and more companies that are not necessarily machine learning companies, right, are just, you know, wrapping their data lakehouse around, you know, a large language model. And then they're putting that into production.
Doing that without having the proper safeguards, that is bypassing responsible AI, because now courts are saying that, well, no, you're on the hook for it. It's not the AI's fault. You know, so courts are rejecting any distinction between decisions software makes and decisions humans make. So, right, there's no difference between if I go and make a decision versus if the AI we're using makes a decision. And there's now state laws, as an example, in California, Illinois, New York, and Colorado that regulate how you use AI in hiring. So we'll see, because we obviously know here in the U.S., at least, there's some federal momentum to try to not allow individual states to regulate AI. But there's already state laws. They are in place. They are real. And we're going to talk here in a minute about how it's actually getting bigger than that. AI moves too fast to follow, but you're expected to keep up. Otherwise, your career or company might lag behind while AI-native competitors leap ahead. But you don't have 10 hours a day to understand it all. That's what I do for you. After 700-plus episodes of Everyday AI, the most common question I get is, where do I start? That's why we created the Start Here Series, an ongoing podcast series of more than a dozen episodes you can listen to in order. It covers the basics for beginners and sharpens the skills of AI champions pushing their companies forward. In the ongoing series, we explain complex trends in simple language that you can turn into action. There are three ways to jump in. Number one, go scroll back to the first one, episode 691. Number two, tap the link in your show notes at any time for the Start Here Series. Or you can just go to starthereseries.com, which also gives you free access to our inner circle community, where you can connect with other business leaders doing the same. The Start Here Series will slow down the pace of AI so you can get ahead. And yeah, it can be costly.
It's not just about maybe getting it wrong and maybe not winning back, you know, trust, or winning trust from a potential customer. It is real money, right? Anthropic, obviously now one of the largest companies in the world, paid $1.5 billion to settle a copyright case with authors. And courts treat AI outputs as relevant to whether they can compete with copyrighted works. So right now, enterprise users are facing real IP exposure if their vendors trained on unlicensed data. So yeah, you need to understand, well, who are you using: OpenAI, Google Gemini, Microsoft Copilot? And are you on the hook? Right? Are you on the hook if something that you create, let's say a, I don't know, a new piece of software, right? And it's something related to health, and you're using one of these enterprise tools, you're using AI to help you build this, right? If you get in trouble, do you have protection? These are questions that you have to have answers to. It starts, right? It starts at the boardroom, but then it has to trickle down to every single major decision-maker in your organization. You have to be aware of those things. And I think we may start to see some of this hit the fan in August. Now, the rollout of the EU AI Act could get pushed back again. It's already been pushed back a couple of times, but bookmark it: August 2026. That is when the EU AI Act enforcement begins for high-risk AI in hiring, credit, and biometrics. So essentially, if you're like, okay, why does this matter? Well, if your company does business in the EU, or if you're offering services in the EU, and you're using AI in that process, which is probably a good majority of our listeners, this applies to you. And noncompliance with these high-risk AI rules in the EU AI Act carries fines of up to 35 million euros, or, here's a wild one, right?
If you're a large company, you're like, okay, 35 million euros? Right, that's our catering costs. Or 7% of global revenue. Oh, now it's got your attention, right? So if you have completely bypassed responsible AI (and governance, right, but that's later down the line), the clock is quite literally ticking. Hey, we'll see if anything like this ever catches on in the US. I think maybe at the state level it will. But then, like I've talked about on the show, if a federal judge throws it out, well, that will be the deciding factor, right? It's probably going to get litigated at some point. But if you do business in Europe, which is many, many companies, and you're using AI, you could be on the hook if you are found noncompliant with any of those high-risk AI rules. It's not just about enforcement. It's not just about going, well, we better be responsible with AI so we don't get in trouble. No, being responsible, yes, it creates trust. But responsible AI leads to higher ROI on AI. According to McKinsey, companies that are investing heavily in responsible AI report over a 5% profit impact versus those companies that aren't. And senior leadership involvement in AI governance drives significantly greater business value. So if your business leaders are heavily involved in the process and they understand the basics of responsible AI, they're making those decisions from top to bottom. It's not just about not getting in trouble, right? This is foundational for growing your organization, because it unlocks real scale and real returns. So let's start to wrap with this. We started with the five pillars that kind of help define what responsible AI is. Now, here is the playbook: the five moves to ensure your responsible AI strategy survives. And again, make sure you go check out the governance episode, because a lot of these things are aligned. But here we go. Move number one, know what you have.
You need to audit every AI system and classify each one by risk level. The cool thing is, the EU AI Act actually put those risk levels out. Number two, assign real ownership. You need to be able to name the individual with the authority, budget, and accountability. That 10-second rule: when something goes wrong, who is the human responsible? You have to know with the utmost certainty. Number three, you need to audit for bias before the courts do. You need to test your AI tools against your actual data to see, well, is there anything that could cause legal trouble, whether it's against the EU high-risk rules or any of the state laws here in the US. Number four, you need to build expert-driven oversight. If you listen to the show, you know I hate human in the loop. You need to have expert-driven oversight. You need to have your domain experts proactively driving your responsible AI policy. So that is domain professionals reviewing outputs, not just, well, Bill in IT rubber-stamping your new agentic workforce. And then number five, you need to treat transparency as a competitive advantage. You should be disclosing when and how AI is involved, right? Not just to your consumers, but to your employees, to your stakeholders, both internal and external, right? We saw that consumers are starting to make decisions based on those companies that are transparent in AI, right? Like, you kind of think back, I think that there's been this kind of boom. Maybe it's a generational thing, I don't know, right? But when you think of, like, organic and, you know, food with clean ingredients, right? You can see over the last 10 years how these types of brands have exploded, because they're being transparent about what you are consuming. And they're saying what's real and what's fake. Think of the same thing, right? With what you are putting out there to the world, with what your company is putting out there to the world. Trust me, the trust crisis has barely started.
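One concrete way to run the bias audit in move three is the four-fifths (80%) rule, a common adverse-impact heuristic in US employment analysis: compare each group's selection rate to the highest group's rate, and flag any group that falls below 80% of it. A minimal sketch, with hypothetical group labels and hiring counts:

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants).
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes from an AI screening tool:
ratios = adverse_impact_ratios({"group_a": (50, 100), "group_b": (30, 100)})
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths rule
print(flagged)  # ['group_b']
```

A flag from a heuristic like this is a signal to investigate, not a legal conclusion, but it is exactly the kind of test you want to run against your own data before a plaintiff's lawyer does.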
And it is a much more significant issue than most people realize. And it's something that if you are like, well, we'll worry about this when we know it's a problem, when we hear rumblings of it, it's going to be too late at that point, right? Because your competitors are already going to be doing this. So I think the fifth one here is huge. You have to become overly transparent now as a competitive advantage. And again, it's not just how you communicate these things externally, to, you know, external stakeholders, to current clients, potential customers, etc. It's also how you communicate these things in your own organization, right? How do you deploy AI agents? Do all of your staff know? Do all of your leaders know? Do frontline workers know how these decisions are ultimately being made? Would they go click that, you know, agent run? Probably not. Transparency starts at home, and then it is going to expand right to your neighborhood or your potential clients. So as we wrap here, here's what I want to leave you with. The same thing with governance, right? Governance is the rules by which you carry this out. But responsible AI, right? You think about ethics, what's right and what's wrong, and then you think about how we actually, you know, set the foundational rules that carry this out. Remember that responsible AI turns your company's AI values into real decisions. It is the foundation for how you look at use cases, data, and oversight. And it also sets the pace for how you will ultimately govern AI. But responsible AI helps you technically go faster. In the same way people think governance slows your AI efforts down, well, responsible AI doesn't slow your efforts down either, because it builds trust, and trust builds the roads. It builds the highway for speed, right? Without those lanes of responsible AI, you're just going to have your cars going all over the place, right? So trust is that infrastructure. The ethics provides the values.
Responsible AI provides the decisions, and governance enforces both. The companies that are earning trust now, while distrust is on the rise, will own the competitive advantage in 2027. All right, that's a wrap, y'all. Thanks for sticking with me as I can barely, you know, talk. It feels like my nose is plugged. Hopefully at one point here, I'm not going to be sick or have crazy allergies, so you can all actually hear me. But thank you for sticking around, and I hope this was helpful as we tackled the responsible AI playbook. And if this was helpful, make sure you go to starthereseries.com. That is going to give you free access to our exclusive inner circle community. You're not going to find it anywhere else. And if you haven't already, please do me a favor: go subscribe to the podcast on Spotify or Apple. Leave us a rating if you could. I'd appreciate that. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.