The AI Policy Podcast

Jennifer Pahlka on Reforming Government for the AI Era

70 min
Feb 5, 2026
Summary

Jennifer Pahlka discusses how government reform requires fixing upstream policy and process blockers rather than just inserting technology, and how AI can help identify outdated regulations and simplify complex systems to improve service delivery and outcomes.

Insights
  • Technology insertion alone fails without addressing underlying policy, procurement, and process dysfunction—the real bottlenecks are structural, not technical
  • Most government rules aren't legally mandated but are interpretations that have rigidified over time; AI can help identify and challenge these misinterpretations
  • Government operates on a post-WWII industrial-era model applied to internet and AI-era problems; fundamental reform requires hiring, procedural, systems, and oversight changes
  • Efficiency improvements in government can make all constituencies better off in principle, but require shifting focus from process compliance to outcome measurement
  • AI's value in government isn't about automation but about helping public servants understand regulatory complexity and simplify systems for scalability
Trends
  • Shift from technology-centric to outcome-centric government reform frameworks
  • Growing recognition that civil service and procurement reform are prerequisites for effective AI adoption in government
  • Increased bipartisan interest in government capacity and workforce reform as gridlock limits other legislative options
  • AI-enabled policy analysis tools emerging to help identify regulatory redundancy and conflicting requirements
  • Test-and-learn frameworks replacing waterfall development models in government digital services
  • Focus on simplification and subtraction of accumulated regulatory cruft rather than addition of new processes
  • Recognition that legacy systems (COBOL, assembly code) aren't the problem; complexity of rules they enforce is
  • Emphasis on identifying and removing policy and process blockers before investing in new technology
  • Growing awareness that government hiring and retention of technical talent is a critical national security issue
  • Movement toward measuring government success by citizen outcomes rather than process compliance
Topics
  • Government Digital Service Reform
  • Civil Service and Federal Hiring Reform
  • AI Policy and Governance
  • Procurement and Contracting Reform
  • Legacy System Modernization
  • Regulatory Simplification and Deregulation
  • Unemployment Insurance System Reform
  • Test-and-Learn Frameworks in Government
  • Policy and Process Bottlenecks
  • Government Capacity Building
  • Interagency Coordination
  • Oversight and Accountability Models
  • Technology ROI in Government
  • Workforce Development in Federal Agencies
  • National Security and Government Effectiveness
Companies
Code for America
Organization founded by Pahlka to bring user-centered technology practices to state and local governments
U.S. Digital Service
Federal unit Pahlka helped establish to transform how government builds and buys technology, inspired by the UK's Government Digital Service
Fathom
Organization that runs the Ashby Workshops, described as the most influential AI policy conference for professionals
Department of Defense
Referenced for cybersecurity policies, CAC issuance bottlenecks, and AI integration challenges
Department of Veterans Affairs
Example of government agency with complex legacy systems and process inefficiencies
Internal Revenue Service
Example of agency with outdated systems (assembly code IMF) and fax machine regulatory requirements
Centers for Medicare & Medicaid Services
Agency responsible for healthcare.gov launch that prompted USDS creation
Government Digital Service (UK)
British government digital service that inspired the creation of the U.S. Digital Service
Recoding America Fund
New organization founded by Pahlka focused on government reform and AI-enabled policy analysis
Defense Innovation Board
Board Pahlka served on for four years, advising on defense technology and innovation
Joint Artificial Intelligence Center
DoD organization where Gregory Allen worked on AI policy and encountered process bottlenecks
Niskanen Center
Think tank where Pahlka is a fellow, published work on government procedure and regulation
People
Jennifer Pahlka
Former U.S. Deputy CTO, founder of Code for America and USDS, author of Recoding America, now leads the Recoding America Fund
Gregory Allen
Host of AI Policy Podcast, former White House intern, worked at DoD Joint AI Center on policy reform
Todd Park
Former White House CTO who recruited Pahlka to help establish the U.S. Digital Service
Tim O'Reilly
Technology entrepreneur and Pahlka's husband; coined term 'government as a platform'
Barack Obama
Former president whose administration created USDS and appointed Pahlka as Deputy CTO
Marina Nitze
Co-author of Hack Your Bureaucracy, featured in Pahlka's book, government reform expert
Nick Sinai
Co-author of Hack Your Bureaucracy, government reform and digital service expert
Nick Bagley
Law professor who wrote 'The Procedure Fetish' on government's obsession with process
Marc Dunkelman
Author of Why Nothing Works, examines consequences of over-proceduralization in society
Marianne Bellotti
Author of Kill It with Fire on modernizing legacy IT systems in government
Ezra Klein
Co-author of Abundance, praised Pahlka's Recoding America book
Derek Thompson
Co-author of Abundance, praised Pahlka's Recoding America book
Ed Glaeser
Speaker quoted on capacity eating policy for a light snack
Quotes
"Capacity eats policy for a light snack"
Ed Glaeser (quoted by Jennifer Pahlka)~32:00
"We have sort of slapped websites on the front end of that and pretended that it's a fit for purpose for the internet era, but we didn't do the backend work to really update it. And now we are entering the AI era, having not solved the last problems."
Jennifer Pahlka~28:00
"The problem really is not so much that these state unemployment systems run COBOL code... The problem is the volume of rules and regulations that these systems have to comply with that make them complex."
Jennifer Pahlka~75:00
"I don't think it's that all public servants are just like slavishly following the rules and don't want to have this discussion. It's that it's like nearly impossible to figure out the answer to that question if you don't have an AI tool to help."
Jennifer Pahlka~60:00
"What matters is the outcomes. Yes. And maybe it is the case that the best way to get us to better outcomes is using AI for X. But like the focus, the northern star, the guiding light always has to be the outcomes."
Jennifer Pahlka~95:00
Full Transcript
Welcome back to the AI Policy Podcast. I'm Gregory Allen. And today we have a special privilege for this week's episode, because we are at the Ashby Workshops. This is an AI conference that is put on by the organization Fathom. And it came from kind of out of nowhere last year to suddenly be, I think it has a legitimate claim to be, the most interesting and influential gathering of AI policy focused professionals. And that includes bringing a bunch of folks from industry, from former government officials, from folks in the academic and NGO sectors, and a lot of people working through a lot of different problems. And we are secluded away in Middleburg, Virginia, where we are forced to interact with each other. And I really do meet people at this conference in a way that I don't always meet new folks when I go to other conferences. So it's intellectually stimulating and it's also good for my networking. And one of the benefits of being at the Ashby Workshops is that Jennifer Pahlka is here. And when the folks at Fathom asked if I wanted to do an episode of the podcast at the Ashby Workshops, and if there was anybody on the attendee list who seemed like an interesting interview candidate, I immediately latched on to Jennifer Pahlka. So if you don't know who she is, it's still very likely that to some greater or lesser extent, her work has touched your life because it has certainly touched mine in big ways. So Jen was the U.S. Deputy Chief Technology Officer in the second Obama administration. She helped found the U.S. Digital Service, which we're going to talk with her about. She has a legendary TED Talk from 2012 that has more than 1 million views. And she was also the founder and former executive director of Code for America and also has a new organization, the Recoding America Fund, which I'm sure she's going to talk about. She, in recent years, has gotten a lot of attention for her fabulous book, Recoding America.
If any of you out there have read the book Abundance by Ezra Klein and Derek Thompson, they celebrate this book for very good reason. So, Jen, thank you so much for coming on the AI Policy Podcast. It's great to be here. I'm very honored that you picked me. Yes. So I should say why I picked you, because it's not just your fancy list of titles. It's because I am one of those people who your work has touched, even though we're meeting, you know, for the first time here at Ashby. So to give a little bit of background there, I was a measly intern in the Obama White House while I was in graduate school. And the U.S. Digital Service was one of the most exciting things going on in the second Obama administration. I was in the White House Office of Science and Technology Policy, and there were so many interesting initiatives that were either directly related to USDS or downstream of USDS. And there was this entire philosophy of government reform that was, on the one hand, yes, the technology is awful and we need to fix it. But sometimes more technology is not the method of getting better technology because, as Jen will explain better than me, there's all these policy and process barriers. And that philosophy was something I took with me when I went to the Department of Defense Joint Artificial Intelligence Center. Thinking about policy and process blockers is just such an interesting and useful and I think just accurate way of thinking about helping government deliver. So now, talking to the godmother of so many of these ideas in government reform, really glad to have you here. So I just talked a little bit about how your work has influenced me. But can you just sort of walk us through your own career journey, how you got started in the public sector, and how you sort of had your aha moments leading to these ideas connecting government performance, execution capacity, policy and process reform, and technology overhaul? Sure. I'll try to do that in less than 45 minutes.
My first job out of college, actually, I worked for a child welfare agency. And I think that just lasted a year. I was a secretary at the front desk filing paperwork, lots of paperwork. Can you tell us what decade this was? This would have been 1992, 1992. So there's probably computers in your office, but probably also a typewriter somewhere if you're like a sleepy government agency. I'm sure there are people out there still struggling with Lotus 1-2-3. Yeah, yeah. Was it WordPerfect? And yeah, I was very good at those programs. But yeah, long story short, I kind of came back right at the beginning of the dot-com boom. I'd gone and traveled in Southeast Asia for a year, which was probably the most amazing thing I ever did. And I ended up working in the game business. I ran the Game Developers Conference for a couple of years and, well, for eight years and then quit that. When my daughter was born, I went back in to run a conference called the Web 2.0 conference. So, yes, that also dates me terribly. And everyone else who knows what Web 2.0 is. The blockchain people are claiming to be Web 3.0, but Web 2.0 was a very influential conference. The old Web 2.0 people do not like that the blockchain people claim that term. But it kind of got commercialized very quickly and went from being very interesting to, I think, a lot less interesting until the government people came along and were like, hey, this, you know, open data movement is exciting. What could we do? And, you know, the idea was, okay, what would gov 2.0 look like? And the first kind of versions of that were like, well, the EPA will be on Twitter or something. And there was a sense by a bunch of us working on the event that that was insufficiently substantial. And in fact, that there was what Tim O'Reilly, who is now my husband, called government as a platform.
And we started the idea of an event called Gov2.0 and tried to sort of define the movement around the event and wanted it to be much more about applying a sort of lightweight, user-centered way of building technology to government, which, of course, is the exact opposite of how government does it: big, heavyweight, very long procurement and development cycles without ever talking to a user until the very end. So we ran Gov2.0 for a while until I got the idea for Code for America and quit and started that organization, which worked at first with city and county governments to sort of bring that kind of style of development in to solve problems in a much quicker, more user-centered kind of way. In the middle of running Code for America, I went to the federal government for a year to help start the U.S. Digital Service. I worked for Todd Park, the White House CTO. And could you just talk a little bit about the inspiration for USDS? Like, what was the problem that you were trying to solve? If my memory serves, a lot of this was born out of the disastrous online platform launch of Obamacare, which had a massive day one website outage. And Obama's like, OK, how do we never have this happen again? And one of the answers to that question, correct me if I'm wrong, was USDS. Yeah, we didn't know, as we were trying to stand up USDS, that we were going to have this, let's say, disaster slash opportunity. It was just before Obama got reelected that Todd Park called me and said, I want you to come to Washington and help me run this presidential innovation fellows program, which was loosely modeled after the Code for America fellowship. The difference being, Code for America was mostly about putting technologists in state and local government. Yeah. And then you wanted to do that on a federal level. Well, he was already doing it.
He'd done a year of it and wanted to sort of have someone professionally run it so that he could focus on other things. It turned out he didn't really know what he was going to be focusing on, but it was clearly healthcare.gov. And I didn't want to come to D.C. to do that, but I happened to be in London looking at this Government Digital Service and being absolutely wowed by it. I mean, what they were doing was exactly what I thought we should be doing in the U.S. Now, there's lots of differences, so you can't exactly, you know, port it over with the exact same strategy, but they were really doing something more profound than just having a fellow here and there. Like they were transforming how they built and bought technology. And I sort of pitched them on the idea of, like, let's do this, not knowing that in fact there were people already talking about what they called at the time Project X. And I sort of infused the idea of Project X with what I'd learned from my friends in the UK. And eventually he convinced me to come on the promise that we would start this unit that became the U.S. Digital Service in homage to GDS. And I came in, I think, June 2013. So I was there for a couple months, really struggling with how hard it was to get anything done in government. Yes, I knew you'd nod when I said that. It's true. And that's when healthcare.gov launched and had, let's just say, a really difficult time the first couple of months. And sort of retroactively, you know, my boss, Todd, and others put together this team that came in and helped get the site back on track. And we kind of retroactively called that the first project of USDS, even though it had not launched. And in fact, didn't formally launch until slightly after I'd left. It was actually much harder to get it up and running. But I had taken a year leave of absence from Code for America and had to go back.
So it sort of laid the foundation for it. But, you know, its origins really did speak to, I think, a persistent question that people have about these units. You know, are they there for firefighting or are they there to, you know, change practices more sustainably in the long term? Are they there to help, you know, do procurements better? Are they there to help in-source talent? You know, there's a whole bunch of different goals that a digital service team might have. And they can be connected. Like, very often, long-term sustainable change comes from a disaster where an institution like, in that case, CMS and HHS, have to adjust because the world is watching and they're under a lot of pressure. But there also have to be other theories of change about how we get institutions to change when they're not in crisis. Yeah. And so in thinking about the aha moment for you, I think there was this sort of theory at the time, which was let's get a critical mass of talented technologists and just throw them in the government and throw them at big technology problems. And you have success stories from that era. But talk to me about how you came to the shift in your emphasis from technology and technologist insertion to more policy and process reform. How did that happen for you? Yeah, I still very much believe that we need to get good technologists and product managers and designers into government and that that core capacity is such a key lever. Ed Glaeser said this at an event that I was at the other day. It's like, you know, they say culture eats strategy. And he said capacity eats policy for a light snack. And just having the right people around, I really think, is this huge unlock that we need to keep doing. But we also just need to go upstream to the structural reasons that we are getting bad outcomes for the American people in the first place.
And so, you know, it has felt like this constant, like, OK, let me go two steps upstream from, you know, the talent that we have to solve this problem to the procurement rules. Oh, wait, let me go further upstream to the ways in which these things are funded in the first place. They're kind of DOA, like, you know, they're dead on arrival if they are funded as a big bang project that has no ability to do iterations and learn along the way. They're also on the back end overseen in the same way; that oversight is all holding them accountable to, you know, this big bang model that really has a terrible, terrible failure rate. And then going further upstream, it's like, what is the operating model that government is relying on in the first place? All those things derive from a basic sense of how we structure our government, our funding, our oversight, our talent. And if you really pull back the curtain, it is still the operating model from the post-World War II industrial era. We have sort of slapped websites on the front end of that and pretended that it's fit for purpose for the internet era, but we didn't do the backend work to really update it. And now we are entering the AI era, having not solved the last problems. The bad news about that is we have a lot to do. The good news is I think we now get to kind of leapfrog into the AI era if we have the will to do it. Yeah. So imagine that somebody out there in the audience is receptive to your message, but they are not quite grokking what you mean when you say the upstream problems, the procurement rule problems. Can you walk us through some of the salient examples in your own experience where you're like, this should work, but X, or this should work, but Y.
Just to give you an example from my own experience, in the Department of Defense, cybersecurity is a huge priority for very good reasons. But you can spend a lot of time and money on cybersecurity without achieving improved cybersecurity. And we do all the time. Yes, exactly. And one of the things that was very painful for me was when we had successfully recruited this hotshot computer science PhD out of Harvard. And he was so excited to come serve his country. And we installed him at the Joint AI Center, which at that point was a very young organization. And he's like, great, you know, where can I install PyTorch? Where can I install TensorFlow? Where are your GPUs? And we're like, here is your Stone Age laptop, and if you install any software on it at all, that's a fireable offense because it's a cybersecurity risk. And so we've got this incredible Ferrari of a brain and he's basically forbidden from using his skills in a meaningful way, certainly for a very long time, because of the cybersecurity paradigm that was adopted and the rules that were adopted. And so I'm just curious, what are examples of that for you and for the work that you and USDS were trying to accomplish, where you had good stuff, you had good people, but you encountered bad rules that made it impossible to do your work? Do you know the classic LinkedIn post, Fix Our Computers? Yes, yes. Yeah. We, you know, I think the presidential innovation fellows I worked with, a lot of them would speak to the same issue. We had one woman who worked at the VA who was on a one-year fellowship and didn't get issued a computer until nine months in. So there's some pretty, like, I don't want to speak just to that level, but I do want to remind people very often, like, we are really at that level.
It's sort of one of the things I tweeted at Elon Musk, you know, prior to Doge getting started, like, you've got these fancy ideas about what needs to be fixed: get them computers. Yeah, you're sparking in me this thing about how so often the government budgeting process when it comes to technology is, to put it charitably, penny wise, pound foolish. So to give you an example from the DOD, there are these little cards that's your ID badge called the common access card, the CAC. And it's what you hold up to the door to get into the building. It's also what you put into your computer that allows you to log into any of your accounts. So it's really important to get one of these CACs. And the CAC issuing office was horrifically backed up, such that, you know, when I started and my initiative was a priority for the DOD, it took me three weeks to get a common access card. So I have no email accounts. I have very minimal ability to do many of the things that I was hired to do. And like, how does that happen? It happens because you have these budget overseers who are like, we need to save money. So we're going to cut 10% from every budget, including the CAC issuing office, not realizing that the CAC issuing office is the bottleneck for the productivity of every single new employee or military service member in the DoD. So they, you know, they saved 10% of the budget of this CAC issuing office, which is probably, I don't know, a million. And they incurred these costs of hundreds of millions or billions of dollars in lost productivity across the entire organization. And this is what I mean when, like, actually the cheapest approach to the CAC office is to be over capacity. Right. Like, so that half of the time people are sitting around doing nothing, waiting for those, you know, surge moments, so that not a single employee ever has to worry about that as a gate.
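The penny-wise, pound-foolish arithmetic in this story is easy to make concrete. A minimal back-of-the-envelope sketch follows; every number in it (office budget, hire volume, delay, weekly cost) is an illustrative assumption, not an actual DoD figure:

```python
# Back-of-the-envelope model of the CAC-office tradeoff described above.
# All figures are invented for illustration, not real DoD numbers.

def onboarding_delay_cost(new_hires_per_year, extra_delay_weeks, weekly_cost_per_person):
    """Productivity lost while new staff wait for credentials."""
    return new_hires_per_year * extra_delay_weeks * weekly_cost_per_person

office_budget = 1_000_000                      # assumed annual CAC-office budget
savings_from_10pct_cut = 0.10 * office_budget  # what the budget cut "saves"

# Assume the cut stretches issuance from ~1 week to ~3 weeks for
# 50,000 new employees/service members a year, each costing roughly
# $2,000/week in salary and overhead while they sit idle.
extra_delay = onboarding_delay_cost(50_000, 2, 2_000)

print(f"Budget saved:      ${savings_from_10pct_cut:>14,.0f}")
print(f"Productivity lost: ${extra_delay:>14,.0f}")
```

Under these assumptions the lost productivity ($200M) dwarfs the $100K saved by three orders of magnitude, which is the point of the argument: deliberately running the bottleneck office over capacity is the cheaper policy overall.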
And I just find it's so impossible for the government to view return on investment in these terms. It's very hard to find someone who's looking at it at that systems level. But, you know, I'll also poke you on that a little bit. I mean, absolutely agree. And we just have this terrible inability to look at where the bottlenecks are. That's the problem in all of these big public failures. I'll talk about unemployment insurance in a second. But there's another way to solve that problem, which is to go look at the, I don't know, I'm going to guess 12, 15 steps that it takes to issue a CAC and say, why? Why do we have these? Like, is there a better way to issue this CAC? And, you know, we just don't do that. And very often, if you bring it up, what you hear is, no, all of these things are sacred. I must do these things in this order with this kind. I mean, if it took you three weeks, it's not just that someone took your card and didn't get to it. Some people were signing off on it that didn't need to sign off. It was sitting in queues in places that simply could just be erased. But there's this real challenge in government when you say, like, pick one tiny part of that. Actually, I'm going to give you an example that I hope you understand is parallel. Like the new CIO at the Treasury who's working in the IRS is going around asking, why do we have all these fax machines? And they say, because there's a whole bunch of kinds of information that by law, they tell him, by law, can only come in through fax. That is, somewhere in some regulation, it is presumed to be written that that is the most secure method. Of course, it was never the most secure method. And maybe for a minute in the late 80s, somebody thought it was. But you get into this weird situation where to change it, you have to prove a negative, right?
You have to prove that the regulation that says that this is the most secure method for this particular kind of information doesn't exist. They can't point you at it, but there's such a belief that this is actually the only legal way to do this thing. And there's probably a parallel in your, you know, process where it's like they firmly believe it has to be this way, or, and I say this with great honor and respect for public servants, they believe it is their job to protect that process because it is protecting the American people. It is protecting the Department of Defense. It's protecting somebody from something bad. And if they don't protect it, things are going to fall apart. But you could probably have increased the throughput of that CAC office with half the budget. Yeah, with half of it. You're probably right. Yeah. We actually talked about this briefly yesterday, but it's an analogy I often use in these situations, which is, you know, if you're trying to water your garden and there's a kink in the hose, you can solve that problem in two potential ways. One is you can massively crank up the water pressure, which is like more budget, more staff, you know, just fight your way through the kink. And the other thing you could look at is unkinking the hose, which is sort of like, wait, there's a rule that says we have to do all this with fax machines. One way to think about your job is, my job is to implement that rule come hell or high water. And sometimes that's great. Like if you're a nuclear safety officer on a nuclear submarine, shut up and follow the rule book. Right. Yeah. But there's another way of thinking about your job, which is like, okay, how do I get this rule changed? Because this rule is obvious nonsense. And how do I persuade the people who have the power to change the rule that we actually could do our job so much better if this wasn't it?
And I want to be respectful of government civil servants who, you know, they're dealing with constantly changing political parties, political leadership, government priorities. And the country needs to know that these people are going to follow the law. So there's a culture of rule following that exists for a good reason. But this sort of shift from the religion about the process to focusing on the outcomes is so desperately needed in so many different walks of government. And, you know, an analogy that I heard you use in an interview somewhere else was, you know, you want desperate rule followers in, like, commercial air travel, airline safety. But when we're dealing with back office processes where efficiency and effectiveness really could be improved by taking a hard look at these rules, you know, how do you instill that alternative culture and that alternative mindset? Yeah. Yeah, that's a concept from the Navy, this idea that you want different kinds of people in the front of the sub and the back of the sub. And it's really just a kind of behavior, right? In the back of the sub, you have to be very faithful to process, never deviate, or, you know, you're going to blow the thing up, right? In the front of the sub, you are navigating through waters you cannot see. You're going to have to improvise sometimes, and you just need a little bit of a – you need different behavior there. But, you know, since we're here to talk about AI, I think there's a key thing that AI brings to the table that is really critical to this behavior change that you're calling for. And that is, people really don't know if the thing has to be the way it is because of an actual law that Congress needs to change or because of an interpretation of that law that is absolutely changeable.
And I think 90 percent of the time, when you really pull back and investigate where that rule is coming from, not only does the law not actually say, for instance, that this information can only come in through fax. There's nowhere in the statute that it says that, I guarantee you. But in fact, the law often says something kind of the opposite. Right. And what's happened over time is that that's been interpreted, and it's gotten more and more rigid as it's sort of fallen through the hierarchy, to the point where what is now practiced is inconsistent with what the law says. So I'm kind of making it up in your example and imagining what that might look like. But let's say the law said, use the most secure method available. And then, you know, some deputy undersecretary of blah, blah, blah, blah in 1985 wrote a memo that says, in almost all cases, faxes will be the most secure thing. And now that stupid memo that some dude wrote in 1985 has sort of taken on this folklore, like tablets brought down from a mountain. When in reality, you just write a new memo that says, actually, the law says we have to use the most secure method. And that memo is obsolete in a massive way. I think about federal hiring. It's the same thing, right? Like, if you actually look at the statutes in Title V, it all says pretty reasonable stuff. If you read the merit system protection principles, they all say very reasonable stuff. But the way hiring actually works is not consistent with that. If it's meant to be a process that gets the best candidate on board, you know, you're supposed to, like, examine people. Well, they've created so much process that it's, on a practical basis, almost impossible to actually assess people for their skills. Yeah. And so we do these self-assessments, let people rate themselves, pick the top of that, and then apply veterans' preference to that pool that has said that they are perfect in every way.
So you've selected for people who either, in a charitable interpretation, had to play the game, or in a less charitable interpretation, have very high opinions of themselves. You apply veterans' preference, and that's how you down-select a large candidate pool. And, you know, in fact, if you look at the statute on veterans' preference, it says that the preference only applies after a qualifying exam. Wow. Right? I'm literally learning this for the first time. Yeah. I never heard this in any conversation I had. Look it up. You know, on a different day, I could cite the actual reference. But it really says that after passing a qualifying assessment, then they get extra points. But these people aren't actually being assessed. Wow. But if you go into any HR office, they will tell you this is the only way that we can do this hiring process that is consistent with the law. And my assessment is that it's actually inconsistent with the law. But it has cascaded. I call this the cascade of rigidity. As it's gone from what lawmakers at a very high level sort of stated their intent, as it's gone down through the hierarchy and been implemented, it's become more and more rigid at every step and gotten sort of locked into: this practice is what we think is consistent with the law. And in fact, what it does is get perverse outcomes. So, you know, 90 percent, and this is hopefully changing with the Chance to Compete Act, which passed at the end of 2024, but 90% of hires in regular competitive hires in government, not, you know, special authorities, not excepted service, rely entirely on self-assessment. So there's no independent assessment of their skills. 90%. Half of all hiring actions just get thrown out because the hiring manager gets this list, sees that no one on it is qualified, and throws it out. They know the process did not select for the skills they're looking for. They throw it out and try some other method of getting the person that they need.
So that's what I mean by sort of being consistent. But the point about AI is that AI is an incredible tool to help us determine: is this thing that we're being told must be this way because the law says it needs to be that way actually deriving from law that we need Congress to go change? Or can somebody in a regulatory role or in a policymaking role, or actually just a manager, change this without any fear that we have actually broken the law? I do want public servants to follow the law. I just think very often we are following practice under the guise of law. And I'm very excited about AI's ability to just empower every public servant to ask that question so that they can. I mean, I don't think it's that all public servants are just slavishly following the rules and don't want to have this discussion. It's that it's nearly impossible to figure out the answer to that question if you don't have an AI tool to help. You know, I don't think we have the full set of AI tools to do that yet. That's part of what I'm hoping we can do through my new effort at the Recoding America Fund: just put those tools in the hands of anybody who is in a bureaucracy who says, there's a better way to do this, I need to know how to change it in a way that is still consistent with the laws of the land, but more consistent with the outcomes that the American public expect us to deliver. Yeah. So to bring it back to your Veterans Affairs example with fax machines. Oh, that was the IRS. Oh, sorry. IRS. My mistake. So your IRS example with fax machines. If I am an employee who has noticed that these fax machines are making my life worse in a million different ways, and I go to my, I don't know, supervisor and say, can't we like not use fax machines? And they say, nope, sorry. That's, you know, what the law says.
Previously, you know, my alternative was, okay, I need to undertake a six-week research project to read the 10,000 pages of regulations and try and find where this fax machine order originates from. And now maybe I have this alternative. Longer than six weeks is what it would take. Yeah. Now maybe I, as a staffer, have this alternative option available to me, which is a large language model-based qualitative search of the relevant laws and policy documentation. And so we can not only find out, like, are these orders incorrect, but also, like, who is the official who I could conceivably get to officially declare that this is, you know, an incorrect interpretation of the rules, etc. So I love what you're saying about how AI, and language models in particular, are a really exciting tool for policy reform, because one of the biggest challenges of policy and process is it's so gargantuan and unwieldy. And even the fastest reader on earth only reads a thousand words a minute. And there's like infinity words, you know, to be read. And so that really is a large-language-model-sized problem. Well, I think it's not just that we want – yes, and I hope we start doing that a lot more often. Again, I think we need to get the tool set there and get it in everyone's hands. The other thing that AI should be good at is helping us simplify those. We have just layers and layers of accreted and accumulated policy and process and regulatory, you know, cruft, essentially. It's all sort of indistinguishable. We need to be able to distinguish it. But we also need to go back and say, OK, look at unemployment insurance, for example, which did not do well in the pandemic. I mean, a lot of people got paid, but a lot of people felt very bait-and-switched by the promise that they would get their benefit and the long, long delays in getting it to them. The problem really is not so much that these state unemployment systems run COBOL code.
In fact, if you really went and looked, we did in California, and others went in other states and looked, the COBOL code generally was like, yeah, it's a little hard to adapt it quickly because, you know, it's just slower to work with COBOL code than more modern frameworks. But it was chugging along. Like, you know, it's an old machine that actually is very, very stable and can process enormous numbers of claims, like tons and tons of claims, really quickly. The problem wasn't the COBOL code. It's the volume of rules and regulations that these systems have to comply with that make them complex. So in New Jersey, where I ended up working a little bit with the labor commissioner there, when he was called in front of the state legislature and yelled at, as they all were, about the backlogs, he brought these boxes and put them on the table. And they were labeled: 7,119 pages of active UI regs in the state of New Jersey. And he kept saying, you want us to be able to scale up, you know, 10, 15, 20x when an emergency hits, and then scale back down and still be able to operate at a reasonable cost when we have far fewer claimants? It has to be less complex. Now, imagine putting, what, a couple of legislative assistants or, you know, people from the legislative analyst's office or whatever on 7,000 pages? That's not going to happen. Like, there's just not a practical path from the resources that we have to a problem of that size without AI tools. But imagine, I mean, it's a lot of work to load that all up into a RAG or whatever. I mean, I think maybe RAG is not the answer anymore, but, like, load it up and then start querying it. Now, it's not going to spit out, oh, here's what the regulation should look like, it should only be 100 pages. It's not going to do that, nor should we expect it to do that.
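A minimal sketch of the load-and-query workflow described here, under stated assumptions: a real system would use embeddings and a language model over thousands of pages, while this toy uses a term-overlap score as a stand-in for retrieval, and the mini-corpus and section headings are entirely hypothetical.

```python
# Toy sketch of querying a large regulatory corpus, as discussed above.
# A real pipeline would use embedding-based retrieval plus an LLM; here a
# simple term-overlap score stands in. All section text is hypothetical.
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def split_sections(corpus):
    """Split a regs document into (heading, body) pairs on 'Sec. N.' markers."""
    parts = re.split(r"(Sec\. \d+\.)", corpus)
    # parts[0] is any preamble; pair each heading with the body that follows it
    return [(parts[i].strip(), parts[i + 1].strip())
            for i in range(1, len(parts) - 1, 2)]

def score(query_tokens, section_tokens):
    """Count how many query terms appear in the section (with multiplicity)."""
    counts = Counter(section_tokens)
    return sum(counts[t] for t in query_tokens)

def query(corpus, question, top_k=2):
    """Return the top_k sections most relevant to the question."""
    q = tokenize(question)
    return sorted(split_sections(corpus),
                  key=lambda s: score(q, tokenize(s[1])),
                  reverse=True)[:top_k]

# Hypothetical mini-corpus standing in for 7,119 pages of UI regs.
REGS = """
Preamble.
Sec. 1. Claims must be submitted using the most secure method available.
Sec. 2. Weekly certifications shall be filed by telephone or in person.
Sec. 3. Facsimile transmission is deemed the most secure method for documents.
"""

# Sec. 3 should rank first: it is where the fax requirement actually lives.
for heading, body in query(REGS, "Which section makes facsimile the required method?"):
    print(heading, body)
```

The point of even a toy like this is the shape of the workflow: split the accreted rules into addressable units, then let a tool surface where a given requirement originates, rather than asking a staffer to read all 7,000 pages.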
But it can actually help you understand what in there is vestigial, what is conflicting, where stuff comes from, you know, what choices and trade-offs we might make in the service of something that is far simpler and therefore far more scalable. If scalability is what we want, we have to do the simplification work. That's so interesting. Okay, so you've got two use cases for AI that you're really excited about. One is noticing that the rules are being misinterpreted and empowering employees to sort of go back to the original source and do it. And then this other thing about simplifying the rules. Which, I mean, thinking just about your unemployment insurance in New Jersey example, I'm literally trying to think: you would probably have to create a state commission for unemployment insurance reform that would work for a year with a massive staff going through this and being like, okay, this is bad and we should get rid of it; this is good, but implemented poorly. And here at the end of this journey is the 250-page regulation that we actually want, that is, you know, the digest, simplification, reform, and improvement of that 7,000-page document. And it's an insanely unwieldy task. And you can imagine that a smaller group of folks empowered with AI could actually move the needle in a relevant kind of way here. And because the current system is so inefficient... I mean, one of the things that economists in undergraduate college courses love to say is that the nice thing about economic growth is that, at least in principle, it can make everyone better off. And the nice thing about efficiency improvements in government is that, in principle, they can make everyone better off. I mean, in reality, there probably will be winners and losers in most of these kinds of outcomes, and that will be politically contentious.
But at least in principle, if we can get this kind of efficiency, we can make so many different people, so many different constituencies, better off. So you're plowing down the path that I wanted to go, which was from your background, to your experience with digital government-type reform, to now thinking about AI and government. And before we go on to additional AI use cases, and I'm really enjoying this part of the conversation, I want to just highlight that so many of the things that, when I was in the Department of Defense, made doing AI in government hard had nothing to do with AI. They had everything to do with the same sort of problems that you were encountering in trying to improve digital services within government, which were policy and process blockers. I often joke, you know, I was a think tank nerd before I went into government. And I would write papers saying, you know, the government needs more AI, the government needs more AI. And then I got to government and it was so clear that we need workforce reform, we need procurement reform, we need all these things, because those problems don't just make it hard to do AI, they make it hard to do anything. And that's why I found your work so inspirational and relevant. Yeah. And so as you're now expanding your focus to include this new work on AI (you've testified before the Senate on this topic, you've written a number of pieces on this topic, you've talked about the key use cases where you feel like AI as a tool can move the needle in accelerating reform), are there other ways that you think about the intersection of AI and your work? Yeah, AI is both a tool to achieve all of the fundamental reforms that I'm now focused on, and thanks for the tee-up on that, but also the force that is changing what government needs to do to meet the needs of people. Right. If that's fundamentally the job of government, we're supposed to meet people's needs and respond to, you know...
respond to a changing environment. We can't just say, oh, let's use AI to help us make this thing that we do now a little bit better, when the thing we are doing now may be the wrong thing. It may already be the wrong thing. I mean, we were just talking about unemployment insurance, which is a reasonable example in this case. Like, UI is a pretty good tool for helping people through the pandemic because it was a temporary disruption. The disruption we're starting to see now, well, you know, if you lose your job to AI, it is much less likely that you are going to get it back than when you were temporarily displaced because of the shutdown, because of, you know, shelter-in-place orders from the pandemic. And so that is just what the program was designed to do. It was designed to meet short-term needs. Now, if we're getting really good at retraining people, maybe it's going to work, but we actually should be thinking in a bigger sense: what is AI going to do to us that makes us not just say, how do I deliver unemployment insurance with, you know, better fidelity and timeliness and all of these things, but are these the right programs to be delivering? And I think that's true in the national security world in a big, big sense. You know, we keep trying to deliver on what we needed in the last paradigm, not in this paradigm. We get these, you know, literal ships that not only don't work by the time they deliver, we don't even need them by the time they're delivered. Yeah. Because we're not fighting that kind of war anymore. No, I certainly encountered this. The version of it that I encountered was often: OK, AI is going to transform warfare, how do we put it on the F-35, right? As opposed to asking: if AI transforms warfare, are F-35s the kinds of things that we're going to want to buy? I mean, I don't come with a strong prejudice that the answer is no. But my point being that that's the right question to ask. You got to ask the question.
You got to ask the right question. Like, it's not about, you know, making what we're good at better. I mean, sometimes it is about that, but other times it's going to be, we don't need to be good at that anymore. We need to be good at this other thing that is going to be more relevant to the challenges we're going to face in the future. That's exactly right. And when people talk about upskilling and re-skilling for an AI workforce in the federal government, I think very often we are missing that piece of it. The assumption is we're teaching people to do the same job they're doing today, but now it's got AI sprinkled on top of it, instead of, oh no, we're going to have to be doing very different things to get very different outcomes, to meet people's needs. But, you know, let me speak to this. You brought up the fundamental things that need to change, which we sort of jokingly all now put under the banner of AI enablement, because that's what sells now. Yeah, you can get a senator or a congressperson to take a meeting on AI enablers. It's much harder to get them to take a meeting called, dear God, you know, workforce reform. Exactly. It's so desperately needed. Though I will say they're more interested in things like workforce reform now because there's sort of less that... there's such a gridlock, there's a lot of things they can't do. But my current thesis that informs the fund that I helped start is that if you want government that can... And this is the Recoding America Fund. Recoding America, yeah. It launched very recently, yeah. If you want government that can actually achieve its policy goals, then it needs to have four things. It needs to be able to hire the right people. And our system hires many wonderful people, but is a very broken system. Oh, yeah.
To cite another example, one of the guys who was most important in the early AI steps that the intelligence community was taking: it took three years to hire that person. Yeah. Because of security clearances and because of every other thing. So what a miracle that this person was in a life circumstance where he could say, yes, I will take that job, and yes, I can wait as long as you need me to wait to go do that stuff. I mean, that cannot be how the system works. And you cannot rely on these sort of miraculous circumstances. You want to attract the best and the brightest. By the time you get that guy in, the needs have changed. I'll give you another example, though. I want to rattle off the other three, but I read this in a paper, I don't have any personal experience with it. I've been long focused on the individual master file at the IRS. It's written in assembly code. Again, sort of like COBOL, I'm less worried about the IMF falling over than I am about how few people know that particular assembly code. It's very few people in the world. So for a very long time they've been trying to, you know, port that to something that has more of a workforce base for it. And they had this guy in who was working on it and was about to be able to do his thing, but he was on, you know, one of these four-year term appointments or whatever, and he kept saying to the HR people, like, my term is up, my term is up, and they didn't do anything about it. His term ran out, and they're like, oh, sorry, we can't have you anymore, just as he was about to start the process. Oh, my goodness. You know, leadership intervened and got him reinstated a year and a half later, at which point he had taken another job and moved on. Wow.
And it was like, you know... because I was like, how is it that we have not... it was just funny for me because I kept thinking, it's sort of crazy that we haven't shored up this incredible vulnerability in the thing that brings in our revenue, right? And in fact, we've tried many, many times. And I kind of felt like, you know, did you watch, what was the Wanda show? In the end, it's like, it was Agatha all along. It was workforce all along, every time. It's HR systems. Any persistent problem that you have, you scrape a little further down and it's HR problems that are keeping us from being able to do what we need to do. So, yeah, we have to fix those things because it's just fundamental to everything else. And I think, you know, you obviously served in a Democratic administration. But if I could highlight the bipartisan nature of this issue: you know, Democrats historically have been supported by government unions and are so in favor of worker protections. And sometimes there's a great reason for that. Right. You cannot afford to pay nuclear engineers the amount of value that they deliver to our country. So you at least should be able to give them stability and predictability, right? You can understand why they'd want this sort of thing. You also want to make these folks protected from inappropriate political winds of change and potential corrupting influences. So we give them job security. But the flip side is we're saying, well, if we're going to give these people these incredibly safe jobs, they must go through this infinitely long checklist of stuff before we would be willing to give away such a fabulous prize. And so, you know, what you end up with is not the people who are, at least in all circumstances, deserving of the kind of trust represented by those job protections, but the people who are willing to endure the absurd gauntlet of the hiring process.
And I do think that that's a point where, you know, Democrats and Republicans have... and this is a point that Ezra Klein and Derek Thompson make in their book quite persuasively: Democrats and Republicans have both, at different times and for different reasons, found political strategies attractive that really make government less effective, with lower capacity. And I don't want this to be perceived as a partisan issue, which it can be framed as. But the reality is both parties have been a source of these problems. Both parties have been a source of them. And I think both parties are now very curious about how to solve them, in slightly different ways, but in ways that have enough overlap that it makes me very optimistic. And the work we're doing is profoundly bipartisan. You know, our board and our staff come from both sides of the aisle. We're actually quite big fans, I think, of the new head of the Office of Personnel Management. We're working with Republican and Democratic members of Congress to get them excited about this. So, yeah, it's very much a bipartisan issue, even though there'll be some differences, and it's not like there won't be some politics there. So I've completely derailed you, because you had the four things you wanted to go through. Well, I'm glad you mentioned unions, because I do think it's important to point out that... I'm going to talk about the other three pieces of it, and I think the unions ought to care about all of this, right? Like, we need to think of it a little bit holistically. So if you want government that can achieve its policy goals, you have to have the right people. So we have to do civil service reform. They need to be focused on the right work, which means we need to do procedural reforms of the kind we were just talking about. Like, how do you go in and say, is this necessary? Why are we doing it this way?
What would be the right way to do it in 2025, or, oh, I'm sorry, it's 2026 now. In 2030, you know, what's going to be the right way to do this? One example of that right now: I'm not a fan of the Paperwork Reduction Act, or what I always call the comically misnamed Paperwork Reduction Act, because it creates enormous amounts of paperwork. It's a discrete thing that we can tackle, and actually just make a lot of people's lives in government a little bit easier, so that they can focus on the meaningful work that gets the outcomes that the American people want, instead of filling out a bunch of paperwork that doesn't really get us anything better. So I'm hoping that a wide variety of stakeholders see why that's valuable, everywhere along the political spectrum. And I think there is broad support. The third thing is that they need purpose-fit systems, and that is about reforming how we build and buy technology. Not just in the way, which I think you brought up and which is exactly right, of people saying, okay, well, we should spend more on technology, as in just crank up the water pressure when there's a kink in the hose. This for us is very much about unkinking the hose. And in fact, I think in many cases you will water that garden a lot better with less water, less pressure; just unkink the hose and get a nice steady drip going. The metaphor works on so many levels. So those are the first three. And the last thing is we need people to be able to operate, or these systems to operate, in test-and-learn frameworks. And that is fundamentally about changing the way we do oversight and the relationship between the executive and legislative branches. We need to be able to say: this is what we want, let's try this thing, is it working, how do we adjust, let's go back, okay, and do that in an ongoing way.
That's how you get to the outcomes that you want. That's true all the way back up the stream of my three other pillars, right? That's true when you're building technology. It's true when you're making procedures and process that are fit to purpose. And it's true when you're hiring people. But that is really the more fundamental thing: when I talk about having an essentially industrial-era model in our heads in government, well, test-and-learn cycles were very appropriate for the internet age. They are even more appropriate for the AI era, where the technology itself is, in a certain way, unstable. You have to continue to work with it and test it and see its boundaries. There's a whole thing in government with technology where you're supposed to, you know, build, build, build, never talking to any users, and then launch and test and clean up for two weeks, and then your testing is over. And your testing should never be over. Yeah. And if you're using AI, good Lord, please don't ever let your testing be over. You should be constantly monitoring and improving these systems. Well, I want to dwell on your test-and-learn thing for a moment here, and then I want to make a higher-level comment. I do think that the government mindset is often best understood through the lens of airline and airplane safety. Yeah. Right. And airworthiness certification. Because that really is a mindset of, okay, in 1995 we finished the software for the Boeing 747. This software is officially safe. So don't you ever touch it, because if you touch it, you might change something that now makes it unsafe. So they have very much a mindset of, that's correct, set it, get it right, and forget it. And in airline safety, there's a lot of wisdom to that approach, right? But when you're talking about delivering digital services, like how do I sign up for health insurance on healthcare.gov?
How do I sign up for my veterans benefits? How do I enlist in the Army? Set it and forget it is a recipe for catastrophe. Because the reality of the world is, as you said, not static. You gave the example of the New Jersey unemployment insurance folks, who basically say: I have to implement the same process whether or not we're in the middle of a pandemic where a decent chunk of the entire state went from not needing unemployment insurance to needing it. And then I have to scale up for that, and then I have to scale down to the routine level of stuff. And I don't have anywhere near the flexibility in my systems or processes to accommodate that kind of thing. And only the test-and-learn approach, right, can get you anything other than that. There's also a... Can I jump in on that for a second, though? So the thing is, we don't set it and forget it. What we do is set it and then let it accrete. Like, set it and forget it would be ideal. Well, I mean, I don't think that's ideal either, but that also isn't totally accurate to what we do. We add process and procedure in a way that... some people have called it continuous layers of paint until the thing cracks. Others talk about it, and I talked about it in my book, in terms of archaeological layers. You can go back in these systems, look at the VA, look at unemployment insurance, and if you took a slice down through them, it literally looks like: okay, that came in in the 1970s, that came in in the '60s. And, you know, it's not just the technology, but the processes that have accrued.
So one of the reasons unemployment insurance is actually in a worse place than some of the other benefits is that it comes from the Social Security Act of 1935, or 1939, 1935 anyway. It's 90 years old, and it's 90 years of adding and never subtracting. We know how to add; we don't know how to subtract. And that is the skill that we need to learn to start making these systems much more stable and scalable. Yeah. So in your four-part hierarchy: people, procedural reforms, purpose-fit systems, test-and-learn frameworks. I think one of the most important parts of it, and I hope people understand this, is that you are a person who has devoted your life to technology. You're incredibly optimistic about what technology can do to empower the government to deliver the services that our citizens need and expect, everything from Social Security to national defense. And what's amazing is, in that four-part hierarchy, I never once heard anything even remotely approaching technological fetishization, right? Like, you're not saying, and we need to use augmented reality because that's the latest and greatest thing. Or, the crisis is, you know, COBOL. In fact, even though that was a trope I encountered frequently when I was in government, you said the opposite. You're like, well, if the COBOL is still working, you know, getting rid of the COBOL might not be the most value-added, highest-return-on-investment project. I mean, you don't love newness for newness's sake. You care about outcomes. You care about effectiveness. And whatever technology, whether old or new or weird or common, that's what you care about: what is going to deliver these outcomes. And I think that is so different from how so much of the politics around technology comes about, because I think what people latch on to are these really visceral examples. And we actually used one of them in this conversation, which is the fax machines, right?
Like, oh, that is so old and obviously so bad. And sure, the key problem there might be the fax machines, right? That might be the worst part. But the philosophy that you're espousing doesn't inherently assume that, right? You would look at: what are the 15 steps of this process? How can we simplify, condense, subtract, and make something better? And yeah, it's probably in this case going to result in getting rid of the fax machines. But the paradigm is about outcomes. The paradigm is not about shiny technology for shiny technology's sake. And I really admire that. Well, thank you. I'm not a technologist. I mean, people sometimes say I'm a programmer, and I'm really not. I think the last time I did anything with code, it was like HTML. But I care about meeting people's needs. That is what government is for. And that means... I mean, people think about it in terms of service delivery, which has been the bulk of my career. But, you know, I did four years on the Defense Innovation Board. People don't talk about what the Department of Defense does as meeting their needs, but it is meeting a need for security. Yeah. Right. And it is meeting a need for, you know, the sovereignty of our country. That's very complicated right now. But that's what it's all about to me. And that comes... you know, you asked about my background. When I worked at that child welfare agency my first year out of college, what I saw was people in need and a system that was not great at helping them, and quite expensive. Lots of people doing a lot, trying really hard, and not taking care of kids the way we should. And that was unfortunately not unique. The child welfare system in many states really leaves a lot to be desired, and so do many other systems, and we absolutely need to put the best our country has to offer on those problems. Technology is part of that.
People's expectations... they expect government to work well, and they have a sense of what government could be doing, because in their personal lives, things are a lot easier, you know. And so we also just need to make them feel like government is trying and using their tax dollars well, so that people have trust and faith in government, in everything from an easy transaction at the DMV to a system that takes care of the kids who are, you know, wards of the state. All those things contribute to a sense that democracy works. Yeah. And the AI part of that, I think, comes through as: what matters is the outcomes. Yes. And maybe it is the case that the best way to get us to better outcomes is using AI for X. But the focus, the North Star, the guiding light always has to be the outcomes. Yeah. Right. And if we can do that with some fancy AI system, great, let's do that. But if we can do that with a hammer and a screwdriver, let's do that instead. Right. Keeping the focus on the outcomes. Now, that said, there are a lot of uses of AI where a pivot table would do it. Right. Can we please not use a million tokens to do something Excel or a regular expression could do? Right. Any time a government organization has framed the problem as "we need to be using more AI"... Yeah. Terrible idea. Completely the wrong framing. Wrong, wrong, wrong. And that said, you know, you and I are both technology optimists. So I hope people believe us when we say what matters is the outcome, not the use of AI to deliver the outcome. But AI does, as we've already talked about in this podcast, enable some really attractive stuff that used to be expensive and complicated in the past.
And maybe now it is, you know, within the reach of affordability and feasibility because of performance improvements in AI. So are there any other applications that you're excited about, as you think about your work and the stuff that you're tracking in the community that you've built around these issues, AI use cases that strike you as particularly exciting and attractive for government to pursue? I mean, some of them are along the lines of what I talked about before. But one example: the state of Maryland has 4,500 job classifications. Whoa. Makes it kind of hard to hire. It makes it hard to hire, and it creates a whole bunch of other problems. But you just realize it doesn't need to be that complicated. I want to give my two cents on this, because I never worked for the state of Maryland, but I have a guess as to why this problem exists. In the DOD, you cannot hire people for job classifications that do not exist. And so when we were like, we want to hire an AI engineer, right, there's no such thing as an AI engineer. Here is the list of all the jobs that have ever existed or ever will exist. It was completed in 1944. And all we have is automatic data processing engineer. That's right. And so my suspicion is that that proliferation of 4,500 job categories maybe now is stupid and bad, but every person who increased the size of the list was probably solving a problem that they faced, which was: I can't hire this person because the type of job that they are doesn't exist on our list. So the solution for me is to add to the list. That's the micro, what's-best-for-me solution, whereas the obvious macro solution is: why do we make it impossible to hire people unless they're on this list? You know, I suspect that's what's going on. It's something very... yes, I think it's that. And it has to do with comp.
People are making up job classifications to get somebody the compensation they need, because otherwise it's a $50,000-a-year job they're never going to be able to fill. I don't know all of the details, but you are correctly assessing the general shape of the problem. But the point is, again, good luck trying to simplify those without the help of some AI tools. In general, I think we're going to see really custom models targeted at really specific things that will be helpful. Another unemployment insurance example is a paper from the folks at Stanford's RegLab, where they're trying to help the adjudicators who have to determine, when you apply for unemployment insurance, whether the terms of your separation from your old job actually qualify. That's a hard thing to figure out. It's not as simple as applying for SNAP, where the question is really just how much money you make. Yeah. There's human judgment involved, and they're building tools to help the adjudicators do that faster. That's a very fine-tuned model for a very specific problem. And then there's the bigger-picture issue of how you would point AI at reengineering that process entirely. I would also like to see AI take a bigger role in helping us think about tradeoffs. Tradeoff denial is one of the key dysfunctions of government, of business, of everyone: no, no, we have to have this, and we have to have this, and we have to have this. Those are all individually good things. Yeah. But their sum total is not good; it makes the system not work.
So, for example, if you're looking at the very fine-tuned questions you have to ask to decide whether somebody is eligible for unemployment insurance under these interpretations of the law and policy, you can create tools to make that determination better. Or you could step back and ask: do these distinctions actually matter that much? You could have different goals for the system. One could be that more people who get unemployment insurance get hired again, that they don't have a devastating economic event and are able to get another job, which is good for the economy and good for people. If that is the outcome you're going for, you could try to measure whether distinguishing between, say, a 1A-B and a 1A-C, these different definitions of termination from your job, actually makes any difference to that outcome, and decide that we don't need to get into it; let's just make the rules about separation from your job a little simpler. That's the kind of thing. And then, of course, as you take it up the stack: is unemployment insurance even the right program to be running right now? We can use AI, I think, for asking higher and higher-level questions, and for helping us figure out whether this thing we are investing tons of energy and time in determining makes any difference to the things we actually care about. Yeah. So your book, Recoding America, did not exist when I joined government, but you'd written enough, spoken enough, and built a community of folks such that you were able to influence me.
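[Editor's note: the measurement idea above, checking whether a fine-grained distinction actually moves the outcome you care about, can be sketched in a few lines. Everything here is hypothetical: the termination codes, the toy records, and the simple rate comparison all stand in for a real analysis on real claims data.]

```python
from collections import defaultdict

# Hypothetical claim records: (termination_code, re_employed_within_6_months)
claims = [
    ("1A-B", True), ("1A-B", False), ("1A-B", True), ("1A-B", True),
    ("1A-C", True), ("1A-C", False), ("1A-C", True), ("1A-C", True),
]

# Tally re-employment outcomes per termination code.
counts = defaultdict(lambda: [0, 0])  # code -> [re_employed, total]
for code, re_employed in claims:
    counts[code][0] += int(re_employed)
    counts[code][1] += 1

rates = {code: hired / total for code, (hired, total) in counts.items()}
print(rates)  # near-identical rates suggest the distinction may not matter
```

With real data you would want a proper statistical test and controls for confounders, but the shape of the question is the same: does the distinction you spend adjudication effort on actually predict the outcome?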
And I'd love to ask: if you were talking to somebody who is just getting started in government today, and they're passionate about what we've been discussing, improving the efficacy and efficiency of government and actually delivering outcomes wherever they work, whether that's Veterans Affairs, DOD, Social Security, Medicare, Medicaid, or education, what's the crash-course reading program you would put them on from day one to get smart on these issues? Say they're good at technology, product design, or project management, but they're just getting familiar with the ideas we've been talking about today. What's the crash-course reading list you would give them, other than your own book, of course? Yeah, thank you, I appreciate you bringing it up. There's starting to be quite a body of work on this topic in various directions. My friends Marina Nitze and Nick Sinai wrote Hack Your Bureaucracy, which is really valuable. Marina is in my book a lot and is somebody I take a lot of inspiration from. She and two colleagues have a book coming out in a couple of months called Crisis Engineering, so pre-order that and put it on your list. I also think it's important to read Nicholas Bagley's piece "The Procedure Fetish." It helped me really understand where the obsession with procedure comes from. Was that in The Atlantic? I'm trying to remember. It was a law review piece, from which law school I don't recall, but it was republished in a slightly more accessible form on the Niskanen Center website, where I'm also a fellow. And Nick has a book coming out too, I think at the end of this year, that is fabulous and would be useful.
Along those lines, and this is general, not so much about technology, I'm a fan of Marc Dunkelman's book Why Nothing Works, which speaks to the real consequences of that over-proceduralization for building the world we want to live in. Oh, I wish I had more time to think about this; there are so many good things to read. If you're a technologist, I'd also recommend Marianne Bellotti's Kill It with Fire, which is really about how to modernize legacy IT systems in government. And it's not what it sounds like. In the age of DOGE, that title sounds like "blow it all up," and that's of course not what she means at all. But what she also doesn't mean is a multi-year lift-and-shift kind of process. One of the things she says in the book is that even people who are really dedicated to agile ways of working will often somehow retreat into very waterfall methodologies when faced with a legacy upgrade, and those don't work. You really do have to figure out these different tactics. I found that really helpful. But maybe I can think of some others and add them to the show notes, because there's a lot to read these days. Yeah, I'll just add one, which was on the reading list produced by Kessel Run, an Air Force software organization very much influenced by your ideas and your work on the Defense Innovation Board. They had a novel that was really a textbook disguised as a novel called The Phoenix Project, and that one was full of aha moments for me. That book is actually the spiritual successor to a book taught in most business schools around America, The Goal by Eliyahu Goldratt, which is very much start-from-the-outcome-you-want-and-work-backwards thinking.
And both of them, again, are basically textbooks disguised as novels, which, for many people including me, is a more pleasant way to absorb this kind of material. But Jen Pahlka, you're somebody I've admired for a really long time, so it is a genuine privilege to have you on the AI Policy Podcast. Thank you so much for coming on. It's been a really fun conversation. Thanks a lot for having me. And may I also add, I have a Substack called Eating Policy that people can follow, if you don't mind. And they should. Good. Thank you. Thanks for having me. Thanks for listening to this episode of the AI Policy Podcast. If you like what you heard, there's an easy way to help us: please give us a five-star review on your favorite podcast platform, subscribe, and tell your friends. It really helps when you spread the word. This podcast was produced by Sarah Baker, Sadie McCullough, and Matt Mann. See you next time.