I think because for the first time, citizens feel that perhaps there are humans that have been taken out of decision loops and decision cycles, and that creates real fear around what the citizen's right of recourse or appeal is. Welcome to Embracing Digital Transformation, where we explore how people, process, policy and technology drive effective change. This is Dr. Darren, Chief Enterprise Architect, educator, author, and most importantly, your host. On this episode: can AI strengthen democracy? The future of government services, with special guest Catherine Friday, Global Managing Partner at EY. Catherine, welcome to the show. Thank you so much for having me, Darren. It's a thrill to be here. So, Catherine, everyone who comes on my show knows that everyone who comes on my show is a superhero, and every superhero has a background story. So Catherine, what's your secret identity? No, I won't ask your secret identity, but what's your background story? Wow, Darren, thank you so much. It's a great question, and I feel really honored to be amongst the pantheon of great speakers that you've had on before. I certainly would be very reluctant to consider myself one of the greats. But in terms of my background story and a little bit about what I do and why I love doing it, it is my great joy and privilege to be leading EY's government and infrastructure practice globally. We're a team of around 40,000 people around the world who show up every day in the delivery of long-term public value. And that is something that we are all deeply passionate about. And it's something that speaks deeply to my own core values. And I love the fact that I have the opportunity to use the phenomenal platform that is EY to bring groups of people together to create legacies in human and social and citizen interest that hopefully endure for generations to come.
That's the sort of work that really lights my fire. Outside of EY, I also have the great good fortune to be a director of Melbourne Park, perhaps most famous as the home of the Australian Open, which a number of your listeners may recently have tuned into and seen. And so again, that's another way that I have the opportunity to bring my purpose to life a little bit: in that instance, to literally be part of creating infrastructure legacies that provide a place where so many different people from so many different walks of life in different countries can come together and celebrate things that they all love and care deeply about, whether it is sport in the Australian Open, whether it is live music or live entertainment or dance or theatre or whatever it is. Melbourne Park is the largest sport and entertainment precinct in the Southern Hemisphere. So we get to play host to some phenomenal experiences that are all about community building. Yeah, that's exactly right. It's lots and lots of fun. So I guess to the extent that I have a superpower, hopefully it's bringing people together in the service and creation of long-term public value. Okay, so I want to talk a little bit about that, because it's unique. Most of the time when we talk to people, they all talk about digital stuff: we talk about software, we talk about IT and things like this. You're really talking about the physical world and its interaction with constituents, with people. But there's a digital backplane to that, especially with AI raising its head in this. There's a lot going on in that space. What do you see as different in this space from a traditional IT space? What's unique about it? It's a fascinating space, Darren. The point that you make is absolutely right: digital infrastructure is critical infrastructure in exactly the same way that a lot of our physical infrastructure is.
And increasingly, of course, citizens are critically and crucially reliant upon this digital infrastructure, which increasingly includes AI, to access fairness and equity and justice and the sorts of services that all of us as citizens of our nations need. We often don't even notice that we need them until those moments that we do. But when we do, we expect our governments to meet us at our point of need, and to do so in ways that are clear and fair and transparent, and where we have a clear course of recourse or appeal. And I think one of the real challenges that governments have with AI is that it is happening so fast that governments are really challenged with how they deploy AI internally within their own departments and bureaucracies and agencies, at the same time that they think about how they deploy this critical infrastructure in the service of citizens and constituents as well. And I don't think that most governments have a really clear public narrative about that yet. So what's missing from that narrative? Why is that such an issue for them? It should be there. They've been serving their constituents for decades, hundreds of years in some cases. Why is there such a disconnect? Why do you think that is? I think there is a genuine and legitimate concern that decisions about my life and my livelihood and the service access that I get and my family gets are all being made by agents that are sort of beyond human oversight and control. And, of course, that's not true at all, but that is the opportunity that governments have right now. Exactly. That's exactly right. But I think this sort of fear and distrust happens when governments aren't very much on the front foot about that level of transparency: of saying, we are going to change this process, and this is where AI is going to be included in this process.
This is what this will mean, this is how your data will be used, and this is how you will then be able to engage with a human and continue to appeal or question decisions when they are made about you. And in a number of countries around the world, my own home in Australia included, we've had our own situations with governments where citizens have been on the receiving end of decisions or outcomes that have been made by AI and have struggled to know how to appeal those decisions, and have felt, in some instances justifiably, that the decisions that were made had a real negative impact on them and on their families. And so those early negative experiences have, again understandably, created a public discourse about the misuse of AI in these systems and in these decisions. And, of course, no citizen has the option to opt out of engaging with government in the way that we might as customers. So this brings up another question I have for you. If government just needs to explain better when AI is being used, when humans are being used, what data is being used, why haven't they done it? Is it because they don't know yet exactly how to leverage AI? Or is it that they're afraid their constituents will not like it? Are they afraid of not being reelected? You know, there's all of these. So what do you think is the crux behind it? I think a lot of it comes down to the risk appetites that a lot of our governments have to lean into conversations where they don't yet have all the answers. But, of course, in this context, in our environments, no one yet has all the answers. Exactly. And as long as governments wait until they have a complete, 100% accurate data set to speak to, they're going to be waiting for a very, very long time. That doesn't yet exist.
And so I think a lot of both elected and unelected officials are reluctant to go on the public record until they feel themselves 100% confident about what it is they're going to say, rather than leading with the simple statement of fact that this whole domain is evolving incredibly quickly, but with that said, these are the actions that this government is going to take today in the interests of improving fairness, equity, speed, efficiency, all of the benefits that could accrue to citizen delivery as part of more proactive engagement with AI. But as I said, I think there is real reluctance for people to go on the public record until they feel 100% confident and feel that they have all of the information at their fingertips, which of course means that the technology is getting away from them. Yeah, yeah, exactly. So this is a problem. How do we solve this problem? Because government's not willing to, because of all those risks, right? I don't want to say the wrong thing, because if I do, you know, it's public record, and everyone will say I'm an idiot, or they'll say you lied to us. There are so many excuses, right? So what are we going to do? Because you're right, AI is moving forward with or without government, so they're falling further and further behind. What kind of solution is there? Yeah, absolutely. So I think the governments that are doing really well with this around the world, and where there is a high level of citizen engagement with government AI, have started small: in low-risk areas, but where there is a clear public service need. It might just be around speed of processing, or it might just be around improving citizens' interaction with government service delivery agencies. So they start at the point where there is a really clear exam question to answer. There is clear tension in the system already. They are very clear publicly about what they are doing to remedy that particular pain point.
They build human oversight into it by design. They are very clear about the fact that they are doing that. They are very clear, in plain English, about explaining how citizens will have recourse to appeal if they so wish. They make sure that the data is sovereign, so the data is kept onshore, to help mitigate people's concerns about how and where in the world their own data might be showing up. And then, as and when there are successes or lessons learned, they are really clear about that as well. So they will start with pilots. They will then have those pilots independently and expertly audited. And they are then really public about the findings from those audits and where, on the basis of that, they are going to be doing better. And those governments have found that very public and transparent disclosure, even where there are early challenges, is so much stronger at building citizen trust than, you know, a silent back-of-house failure that is ultimately picked up by a leak somewhere, or something in the media.
Failure in silence, behind closed doors and in a black box, undermines public trust so fast. So what I'm hearing you say is that it's better to be transparent and show the world the mistakes, because there's a little bit more forgiveness in doing that than in trying not to be transparent and doing things under the cover, or sweeping things under the rug, as we say, when the AI fails. Instead, it's almost like a journey together: hey, let's try this new thing out together as a government and the citizenry. Am I getting that right? Yep, that's exactly right. And of course, as with all of these things: to start small with small proofs of concept, small sandboxes, and to ensure that there is always human oversight involved in decision-making where those decisions have a direct bearing on citizens' receipt or experience of government services. So to start with AI playing to its strengths, around system-wide data analytics, and use that to provide for fairer, faster citizen service delivery in the first instance. And therefore for both citizens and communities to have the opportunity to build up their own understanding and muscle in engaging with governments in this way, at the same time as governments have the opportunity to build their own muscle around governance and workforce upskilling, and around creating auditability, obviously highly transparent public auditability, in these systems as well. And the governance piece, I think, is hugely important, Darren.
I'm glad you brought up the workforce and governance around all of this, because to me that's the other aspect that we haven't talked about yet, which is the government workers. They're worried about their jobs, right? It might be an AI overlord now, because I need to oversee AI doing my job. I mean, we've heard this before, when we outsourced IT to, you know, emerging economies. I remember this in the nineties: people were told, oh, we're sending you to India to go train your replacements. Well, thank you very much. So that's another aspect of this adoption and the governance around that. What do we do in this case? Because government has to adopt AI; the efficiencies are there, we see that, or at least I think we see that. I've got several dimensions on this, and this is just the second one we're talking about. Yeah, absolutely. And look, the workforce element is absolutely right, and I think we're going to see heads of HR and chief people officers across not just public but private sector organizations as well actually building in, or acknowledging, agents within the workforce, and having agents as team members in delivering goods and services, regardless of where we look in the economy. One of the characteristics of the workforces in most developed countries, though, is that the government and public sector workforce is, if you like, aging out faster than is true of the population more broadly. So we are approaching almost peak government and public sector workforce headcount at the same time as populations around the world are continuing to grow. So the work that government and public sector agencies need to do is expanding at the same time as the human composition of those workforces is in fact contracting. So there are a couple of things here.
One, there is a genuine need for better, faster, more efficient ways for governments to deliver their existing services to citizens, and I think AI absolutely has to be part of that. And then there is the related piece of also upskilling the existing workforce to be able to better deploy AI in the service of citizens as well. And that piece around upskilling is something that needs to happen in real time. So, in the language of the education sector, we talk very much about work-integrated learning, because of course we can't take whole swathes of employees offline all at once to go and do 12 months of AI familiarization and upskilling and learning and training and all of that, in the way that we would have done in previous generations. Instead, we need this cohort of workers to be developing their skills pretty much in real time, so that come every Friday, they are more skilled at using AI than they were on Monday morning. And so, again, one of the jobs of chief people officers and the like is to design for that within the construct of every work week, and to make it easy for the workforce to develop these skills. And, of course, it's agents themselves, it's the AI itself, that will actually help this to happen. So in that way, the AI is both the topic, or the subject, and the enabler of becoming familiar with the content. But all of that needs to be happening in real time. And it's only by developing that workforce muscle that those public servants or civil servants themselves then have the skills that they need to be able to regulate and create the policy framework within which AI is going to be deployed across the rest of the economy as well. And of course, that's one of the real differences in government and public sector workforces compared to commercial workforces.
The public sector both has to get to grips with this and use it in the service of all citizens, at the same time as it is also responsible for creating the legislative and regulatory and policy environment in which AI is going to be deployed more broadly. So it has two hats to wear in this context, and they are two massive hats, and it's needing to wear them both at the same time. So that's really interesting, because it sounds to me like government workers are going to have to have some very unique skills that maybe they don't have now: not just around AI, but around critical thinking, around, I hate to say predicting the future, but predicting the impact of new technologies coming in, and a lot more than they have in the past, where we just threw people at problems in government, or just hired a bunch of people to help process things. AI can do a lot of that stuff. So the job skills are going to be very different moving forward. That's what it sounds like to me. Does that sound right? I think that's exactly right, Darren. Critical thinking, absolutely. And something that I am seeing more and more is thinking around ethics and moral judgment and philosophy, if you like, and what are the roles and responsibilities of humans and humanity. And in the public sector context, how do we make sure that our national values are part of the decision-making frameworks that are being designed into the work of agents, so that there is, if you like, the moral or ethical or national values foundation that, as a citizen, I would expect my government to be taking into account when it makes decisions about rights, access, fairness, services, all of those sorts of things, but which, of course, LLMs on their own don't innately have. So, you know, we are seeing some examples around the world of sovereign AI being developed, like SEA-LION. I don't know if you've heard of SEA-LION.
That's the LLM that the Singapore government has developed, which is explicitly designed to accommodate both law, L-A-W, and lore, L-O-R-E, and the 34 language groups that live within Singapore. And I think citizens have that expectation of the moral parameters that their governments will bring in terms of their broader public engagement. Do you think, because LLMs can be creative, but when you constrain them with moral guidelines and with rules and constraints, like what they've done in Singapore, they tend to be more resilient to the political pressure and change that we see in countries all over the world, right? So this is an interesting philosophical question. Do you see that maybe LLMs will keep the morals and values of a country, kind of be the gatekeeper of those, instead of them shifting like we've seen all over the world? You know, a country shifts back and forth, that swinging pendulum between the political parties or between different moral values as it goes through. Do you think that will happen, that it will be almost like a dampener? This is a philosophical question. It's a huge philosophical question. I think the potential is there, if that is what they are designed to do. And as we know, AI will seek to optimize what it has been designed to do. And if that is how it is designed, if that is the ask that we have of it and we set it up specifically to do that, then I do think that is a function that it could help serve. And I think that then raises interesting questions about how AI is audited in the future as well: to make sure that audit is not just on the datasets that are being used, but on the other design elements, and the ongoing veracity and correctness and completeness of those relative to the original design brief, so to speak. Yeah, this is really interesting, because we all know history shifts over time. Yes.
And they always say the conqueror writes history, right? The winner of a war writes the history. But things shift over time. I have two daughters who studied history in college, and it's interesting listening to their perspective, because they've studied both sides of it. So this is going to be an interesting thing: how AI plays into government is something we've got to figure out as a society, because right now it's kind of up in the air. I completely agree with you, Darren. And for me, that's almost the biggest risk of governments, if you like, being a bit absent right now from the public square discussions on AI. If government's not in the discussion, then government values aren't shaping the discussion either. That's right. That's exactly right. So my view as a private citizen is that our governments actually have a duty of care to be involved in the discussion, and to be thinking and acting perhaps a lot faster than they have, both about minimum expectations of AI providers within their countries around what they need and expect of sovereign AI, and also about their application of AI within their own agencies. Because I think that they can't regulate or create policy from the sidelines. They actually need to be in the environment and in the ecosystem to be able to effectively understand, support, and regulate it in a way that is consistent with society's values. Now, this is going to be interesting to watch over the next five years, maybe even less than that; it's moving so quickly. So hey, Catherine, if people want to find out more about this topic, how do they find out more about you and what you do, and about government and AI? All they would need to do is Google EY and either my name, Catherine Friday, or EY government, and there is a huge amount of information that will immediately spring up for them.
What might be of most interest to your listeners, though, Darren, is a report that EY released in June 2025 on the topic of how data and analytics and AI can transform government service delivery and create public value. That report is publicly available, and it has links within it to all sorts of other related thinking, both from EY but also from entities like the OECD. And we obviously highlight their thinking about the role of government in combating mis- and disinformation, and the obligations of governments around creating public governance for digital democracy. These are the sorts of ideas that the OECD is sponsoring and advocating for, and with which we agree and say, again, government's role is to be active within the ecosystem and to recognise where it needs to show up to support citizens as they themselves grapple with AI. And each citizen will come to their government engagement with varying postures on the trust spectrum, from "I very happily trust my government and believe what it tells me", to maybe a position of mistrust in the middle, and then wild distrust at the end. And ultimately the responsibility of government is to engage with all of those citizens. So, you know what we'll do? We'll put links to those documents on my website, embracingdigital.org. Catherine, is that okay? That'd be perfect. Thank you. I'll flick them through after the call; please feel free to use them however you please. Perfect. So to our listening guests, go out to embracingdigital.org; there'll be tons of content there on this episode. Catherine, again, thank you for coming on the show. It's been wonderful. Oh, Darren, it's been an absolute joy. Thank you so much for the chance. It's been really lovely to talk about these ideas with you. Thanks for listening to Embracing Digital Transformation.
If you enjoyed today's conversation, give us five stars on your favorite podcasting app or on YouTube. It really helps others discover the show. If you want to go deeper, join our exclusive community at patreon.com slash embracing digital, where we share bonus content and you can always connect with other change makers like yourself. You can always find more resources at embracingdigital.org. Until next time, keep embracing the digital transformation.