Lawfare Daily: Why AI Won't Revolutionize Law (At Least Not Yet), with Arvind Narayanan and Justin Curl
44 min • Feb 12, 2026
Summary
This episode examines why AI may not dramatically reduce legal service costs despite impressive technical capabilities. The hosts argue that structural features of the legal profession—including regulatory barriers, adversarial dynamics, and the need for human judgment—create bottlenecks that prevent AI from delivering the transformative productivity gains many predict.
Insights
- AI is a 'normal technology' requiring decades of organizational and institutional change to realize benefits, not a revolutionary force with immediate impact
- Adversarial legal dynamics create arms races where both sides using AI increases total work output without improving outcomes, similar to high-frequency trading
- Unauthorized Practice of Law regulations create ambiguity and chilling effects that prevent efficient AI integration, yet current rules fail to protect consumers effectively
- Human involvement remains a bottleneck not just for normative reasons but practical ones: judges and lawyers have finite capacity to process increasingly complex AI-generated work
- Regulatory sandboxes and competitive certification models could replace restrictive UPL rules while maintaining consumer protections
Trends
- AI capability improvements don't automatically translate to professional productivity gains due to organizational and regulatory constraints
- Adversarial professions face 'arms race' dynamics where AI adoption by all parties increases costs without improving relative outcomes
- Regulatory uncertainty around AI in professional services is creating competitive disadvantages for innovation-focused firms
- Access to justice crisis may be partially addressable through AI-enabled legal services if regulatory barriers are reformed
- Professionalized fields with guild regulations show slower AI adoption than unprofessionalized cognitive work like software engineering
- Parallel-track legal systems (human judges for high-stakes cases, AI/arbitration for routine matters) emerging as a pragmatic reform approach
- Discovery process digitization created cost inflation rather than savings, suggesting similar dynamics may occur with AI-generated legal work
- Demand for human decision-makers in law may persist for legitimacy and democratic control reasons, not just capability reasons
Topics
- AI Adoption in Legal Services
- Unauthorized Practice of Law Regulations
- Adversarial Dynamics and Arms Races
- Access to Justice and Legal Affordability
- Regulatory Sandboxes for Legal Tech
- AI Capability vs. Professional Integration
- Human Judgment in Judicial Decision-Making
- Discovery Process and Litigation Costs
- Professional Guild Regulations
- Credence Goods and Legal Service Quality
- Transactional vs. Litigation AI Applications
- Electronic Communications Privacy Act Reform
- AI Safety and Democratic Control
- Diffusion of Innovation Framework
- Comparative Professional Regulation
People
Arvind Narayanan
Princeton computer science professor and Center for Information Technology Policy director; co-author of research report on AI's limited impact on legal service costs
Justin Curl
Harvard Law School third-year JD candidate; co-author of research report examining structural barriers to AI adoption in the legal profession
Sayash Kapoor
Princeton PhD candidate; co-author of research report on AI's limited impact on legal service costs
Alan Rosenstein
University of Minnesota law professor and Lawfare Research Director; host and moderator of the episode
Gillian Hadfield
Legal scholar whose work on credence goods and professional regulations forms theoretical foundation for the analysis
Paul A. David
Economic historian cited for analysis of electricity adoption delays and organizational restructuring requirements
David Graeber
Author of 'Bullshit Jobs' book examining how competitive dynamics create unproductive work across professions
Quotes
"The amount of work that each side does could essentially just go up because now both sides are being hyperproductive with AI instead of writing like one motion or writing five pages or looking at 100 cases. They're now doing 100x that in all of those relevant domains."
Justin Curl•Opening segment
"Judges make law a lot of the time, and exercising human judgment about what we want the world to look like, that's the perfect example of what I would want humans to be doing in a world where all conceivable labor can be automated."
Arvind Narayanan•Mid-episode
"I think this is what it means to be in control of our own civilization. All those debates about AI safety, this is actually what it boils down to for me. Not killer Terminator robots. These kinds of moments where we put the course of humanity in the hands of machines, I think that's a line we should not cross."
Arvind Narayanan•Late episode
"The insight of AI as normal technology is that we can and should learn from these past general purpose technologies, and AI is not exceptional in that way."
Arvind Narayanan•Early episode
"I just am not very compelled that they're actually doing that good of a job of it right now. It seems to be making things much more expensive. And there's some people who've passed the bar who give horrible legal services."
Justin Curl•Closing segment
Full Transcript
The Electronic Communications Privacy Act turns 40 this year, and it's showing its age. On Friday, March 6th, Lawfare and Georgetown Law are bringing together leading scholars, practitioners, and former government officials for Installing Updates to ECPA, a half-day event on what's broken with the statute and how to fix it. The event is free and open to the public, in person and online. Visit lawfaremedia.org slash ECPA event. That's lawfaremedia.org slash ECPA event for details and to register. The amount of work that each side does could essentially just go up because now both sides are being hyperproductive with AI instead of writing like one motion or writing five pages or looking at 100 cases. They're now doing 100x that in all of those relevant domains. So the amount of outputs has increased, but because the outcome that clients ultimately care about is settling favorably or winning at trial, it takes much more work and much more outputs to reach that exact same outcome. It's the Lawfare Podcast. I'm Alan Rosenstein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare. I'm talking to Justin Curl, a third-year JD candidate at Harvard Law School, and Arvind Narayanan, Professor of Computer Science at Princeton University and Director of the Center for Information Technology Policy. Judges make law a lot of the time, and exercising human judgment about what we want the world to look like, that's the perfect example of what I would want humans to be doing in a world where all conceivable labor can be automated. Today, we're discussing their new Lawfare research report, co-authored with Princeton PhD candidate Sayash Kapoor, arguing that despite AI's impressive capabilities, structural features of the legal profession, from guild regulations to adversarial dynamics, mean that the technology may not deliver the dramatic cost savings that many predict. 
So I'm excited to get into the paper that you and your co-author Sayash Kapoor have written about the effect of AI in the legal profession, and specifically why it might not provide the sort of cost savings that everyone is predicting, and at least some people, though perhaps not lawyers, are hoping for. But before we get into that, I want to take a moment to talk about the broader framework of how you're all thinking about this, which draws on this broader project that you, especially Arvind and your collaborator Sayash, have thought about and have written a great book about and done a lot of great writing about, which is this idea that AI is a normal technology. So just before we get into the law part of this, just sketch out what you mean, and particularly what you mean by a normal technology. Definitely. Let me start with a historical example. Way back when electricity became a thing, there was a lot of hope that it would enable factory owners to rapidly achieve a lot of cost savings by, as I understand it, replacing those big, messy steam boilers with electricity. But there's a great analysis of this by economic historian Paul A. David. And it turns out when they first started doing that, it didn't really seem to help. And it took 40 years, something like that, to really figure out how to gain the benefits of electricity. And that's by taking advantage of the fact that electricity is a much more portable kind of technology and you can move it and generate it wherever you want. So that required restructuring the whole layout of factories, going towards the logic of the assembly line, changing how firms hired and paid and trained workers, that sort of thing. And these kinds of downstream innovations, again, took a period of decades. And really, the insight of AI as normal technology is that we can and should learn from these past general purpose technologies, and AI is not exceptional in that way. 
And a lot of the discourse starts from rapid improvements in AI capabilities and draws a straight line to societal effects or effects on any particular profession or the economy. And our view is, no, that's not going to happen. There are various stages in this pipeline. So we take this existing framework from the theory of diffusion of innovations and apply it to AI specifically. And we have four stages. The first stage is improvements in capabilities. The second one is how those get translated into better products in law or any other particular domain. The third stage is workers starting to adopt these products and learn to use them. And the fourth stage is really the hardest one. That's where you need to make organizational changes, changes to laws, norms, business models, etc., which we get into in this paper. But really, the overall framework is about looking at all of those four stages in order to understand the speed and nature of AI impacts on any particular profession, as opposed to merely looking at, you know, what is the latest performance of GPT 5.2 or whatever. That's great. And let me just dig in a little bit before I turn to Justin to sort of set up the law part of this. To be clear, and I just want to make sure that I'm understanding the argument correctly. When you say normal, you don't mean non-transformative, because, of course, electricity was quite a big deal. Automobiles were a big deal. The Neolithic revolution was a big deal. I mean, a lot of these things are big deals. It sounds like what you're saying, though, is that it just takes a lot longer than people think. I think there's a quote, I think it's often associated with Bill Gates, and maybe it's one of these quotes that's associated with many people, which is that people systematically overestimate what can be done in the short term, but because people are bad at compound math, they tend to underestimate what's going to happen in the long term. 
Is it fair to say that you are at least open to the possibility that in the longest term, AI will have a deeply transformative effect, even if it takes quite a long time and there are roadblocks and reforms and a lot of messiness that, as you pointed out, the latest frontier math accomplishment that GPT 5.2 Pro or whatever is going around X does not really capture. That's exactly right. We are pretty optimistic about the effects of AI in the long run. We do have much longer timelines than a lot of people talking about this. But also, we emphasize the agency that's involved. I think these effects are not preordained. I think we need to get a lot of things right in terms of reforming our institutions in order to be able to take advantage of these benefits. What interested you in particular, because, you know, Justin's a law student, I'm a law professor, sort of we are professionally invested in the legal field. You're a computer scientist. So there are any number of fields that you could and I'm sure have in your research applied this to. I'm curious if you think that the legal field in particular is a good test case, or if it has some unique elements relative to other possibilities. For you in particular, what was interesting about the legal angle here? Yeah, lots of things. So let's contrast a few different fields. On the one extreme, I would say is software engineering, which like law is a purely cognitive field. And unlike, let's say, medicine, which I'm going to put on the other extreme, although it's not really an extreme, I want to, I want to look at the range between these three fields. So in software engineering, being a purely cognitive field, we're starting to see the impacts of AI very quickly. But an important way in which it's different from law is that it's not professionalized. There aren't a whole bunch of regulations about who can do software engineering, how software engineering can be done, various kinds of liability, none of that stuff. 
And so the impacts are starting to hit very, very quickly. On the other hand, in medicine, you know, like law is very professionalized, but also you can't, as an individual, just start, you know, using a model for self-diagnosing yourself. You can, but you very quickly run into roadblocks. Yeah, I think tens of millions of people might disagree with you there, Arvind. Yes, for sure. Yeah, they are. But the problem is you can't prescribe yourself something, right? And so you hit a roadblock in terms of interacting with the system. Law is very nicely in the middle. It is purely cognitive. And so there is clearly a lot of great potential. But at the same time, it is very, very professionalized. There's all these regulations that we discuss in the paper. And the reason it's so interesting to look at this profession that's in the middle of these two extremes is that you can really imagine and spell out what are the kinds of reform that will be needed in order to really take advantage of this tremendous possibility. And those are things that, you know, the profession can start doing now. It's much harder to articulate that in medicine. So an example is people say, oh, we're going to be able to dramatically speed up drug development. Well, the problem is the hard part of drug development is not discovering new molecules. It's, you know, testing them, the human trials, which take 10 or 15 years, highly regulated, et cetera. It's really hard to articulate how you can compress that down to, let's say, a few months without, you know, throwing away what we've learned about testing drugs safely. But when it comes to law, it's not that kind of thing. You can actually imagine and spell out the reforms. And I think that's what Justin has tried to take the lead in doing in this paper. I'll just jump in briefly. I think one way that I also think about the AI as normal technology framework is just as a prescription of where we should be focusing our efforts. 
I think it's really easy to sort of view AI as this like all encompassing tsunami where there's nothing any individual can do to fight back against it or can intervene or shape its development. But AI as normal technology tries to identify systematically what those organizational and societal bottlenecks are so that you can know where you should focus your efforts if you're trying to ensure that sort of AI diffusion is positively impacting people. So I think one way of looking at your paper is an exploration fundamentally, you know, less about AI and more about why law is so expensive, right? Why the practice of law, why the products of law are so much more expensive and why, you know, we don't see the productivity gains in law that we do with, I don't know, flat screen TVs or many other things. And so you identify sort of three structural reasons that legal services are expensive, even before AI enters the picture. So you talk about law being a credence good. You talk about how the value of a legal service is often relative rather than absolute. And then of course, there are these professional regulations. So can you just sketch out for sort of the non-lawyers in the audience, what is it about law that makes it so expensive, sort of ex-ante before we can start talking about any technological developments like AI? Yeah, of course. And I think here it's really important to sort of nod towards Gillian Hadfield's work. A lot of this, it comes directly from papers she's written over the past two decades about this. They're excellent. I recommend people check them out. Starting with the first sort of reason about credence goods, I think one reason law is unique is because it's very hard to evaluate the quality of legal services, even for lawyers and experts in the field. 
You could imagine, if I'm engaged in this complex year-long trial, it's hard to know whether a particular motion filed, or a particular decision in one sentence about how to frame a topic, is the reason why the client reached the outcome they want. And so instead of being able to directly assess the quality of legal services, you're sort of forced to rely on reputation, or things like how prestigious the law school that someone went to was. And so that makes it very hard to have a functioning market when you think about legal services. The second reason, about the value of legal services being relative, is that it's also hard to have an understanding of legal services in the abstract. It's not like I can look at a legal service and say, that's a seven. If I'm engaged in litigation, oftentimes whether I'm able to achieve the result I care about depends on what the other side is doing. So whether my contract term is really good might depend on what the other side is doing and how they're thinking about it. And then the final sort of reason that, again, Professor Hadfield identifies is that there's a very complicated regulatory framework. There are two types of regulations, I think, that are relevant here. The first is UPL, or unauthorized practice of law regulations, which limit who is allowed to provide legal advice. That is defined incredibly broadly. So anytime you apply legal knowledge to specific facts, you might be engaged in the practice of law. If you do so without permission, or you're not a licensed attorney, that's actually a felony in a lot of jurisdictions. So that's already one reason why maybe some of these companies, when their chatbots are providing things that look like legal services, might actually start to run into some liability for it. And I'll also jump in and just add to that. So I am a lawyer, believe it or not, despite being a law professor, though I don't recommend hiring me for legal advice. 
I am technically a lawyer. I'm barred in the fine state of New York. And when I moved here to Minnesota, at some point, I was just curious, okay, well, what does legal practice look like? Because I don't intend to really practice law anymore, but I don't want to accidentally, you know, practice law without doing the right thing. And so I looked at the New York bar rules. And what's amazing is there were all these paragraphs about what the practice of law might be, but then they specifically refuse to say what the practice of law actually is, and they say that they won't tell you. So it is an almost Kafkaesque situation where there is such a thing as the unauthorized practice of law. It's quite a big deal to do it, but no one will actually tell you what constitutes the unauthorized practice of law, which I have to assume just has sort of a chilling effect on this whole industry. I think there are two really good examples actually connected to that that we talked about in the paper. I think one is, if you look at what the New York Bar Association has said, they've been like, well, chatbots, they might be the practice of law, it seems like it's getting close. And so I don't really know what I'm supposed to do with that if I'm a lawyer, or even a consumer, thinking about using AI. What units is close measured in? I'd be very, very curious. How many GPT units is close in this context? Yeah, that's also what I want to know. Can I ask you if either of you know if there have been any lawsuits against the major chatbots? I mean, I use it all the time for little things like reviewing contracts, and I assume that many people out there are doing that. So presumably, these chatbots, you know, at least in my case, are providing legal advice that's tailored to my situation. So I wonder if people have been trying to sue them. I don't personally know of any of these. 
I mean, obviously, automated legal services are a thing that predates these chatbots. And I think there have been some legal, or at least regulatory, challenges to some of these. You know, I don't know if LegalZoom has faced these challenges, but sites like that. There are also the countervailing First Amendment considerations, where the guild could not get too aggressive about this because people also have sort of First Amendment rights to, you know, talk to a chatbot about their legal issues as well. But I'm curious, Justin, if you've heard of anything. Yeah, I haven't seen anything focused on AI chatbots specifically, but I think LegalZoom is a good example, where over the past two decades, they've been sued countless times and they've had to rework the actual way that they provide legal services because they've had to reach very expensive settlement agreements. And what's interesting about that is who the plaintiff is. Like, you ultimately need a plaintiff to bring the lawsuit. And so sometimes that's the attorney general of a state, but sometimes that's actual individuals who have received the legal services. So maybe if ChatGPT gives bad legal advice and someone's upset, we might see a new lawsuit about it. So I want to get into the bottlenecks that you all go through in your paper. But before I do, I want to address sort of one thing that's not in the paper that actually might be surprising if you're reading a paper about skepticism that AI will automatically lower the cost of legal services. And that is that what none of you are arguing is that AI won't be able to do the actual individual cognitive tasks. There are a lot of people who think that AI is quote unquote fancy autocomplete, or stochastic parrots, or just a giant plagiarism machine (there are lots of ways of dismissing it), and that it'll never have the skill or the creativity to be a really good lawyer. Am I right? 
And let me ask you, Arvind, since you also have sort of a broader sense of the AI landscape across different cognitive domains, that's not what you're arguing. It seems like your paper is happy to concede the possibility, and maybe you all actually believe this. I certainly do, but I'm not a computer scientist, that already today, and certainly within several years, on any discrete, even large-scale task like write me a Supreme Court brief, you know, GPT-7 may actually be quite capable of outperforming all but the absolute elite lawyers, and even for those elite lawyers, at a tiny fraction of the cost that it takes to, you know, hire Paul Clement to argue your case. And so all these bottlenecks are actually totally separate from the raw capabilities. Is that a fair articulation? That's mostly where we land. That's right. We're not capability skeptics. So let's divide it into two ways of looking at it. One is some of the current limitations, such as hallucinations or not really having access to all of the documents that it would need in order to do a good job in your case, that sort of thing. These are all easily fixable in our view, you know, especially when you consider the long term of AI development, they're going to get fixed. But then you do get into some gray areas. So what it means to write a good Supreme Court brief is not, unlike, let's say, coding, something where there are correct and incorrect answers. People are going to disagree about that. These are matters of judgment. So we do think there are some limits there in terms of how good AI can get, because it learns from feedback, and it's not going to be that easy to learn from millions of cases of feedback where AI creates an argument and then that brief is submitted and then you get to learn from what was the result in that case. That feedback loop is extremely slow, and so you're not necessarily going to see the kind of rapid capability progress that you see in, let's say, math or software engineering. 
Nonetheless, that's not where our skepticism comes from. We are acknowledging that there might be a day where AI is able to do any precisely specifiable cognitive task that most lawyers are able to do. All right. So let's now then jump into the first bottleneck, which is what you were talking about just recently, Justin, this question of regulatory barriers. So explain sort of how that could get in the way of AI really revolutionizing and lowering the cost of legal services, especially, and this is maybe a bit of a counterargument, even if, at the end of the day, the legal landscape, you know, if you go to law school in, you know, 2029, 2030, even if the legal landscape superficially looks very similar, there are a bunch of law firms, some are very big, some are medium, some are small. If all of these law firms are using AI integrally, right, if these law firms are essentially kind of wrappers around these models, why isn't that enough to really have AI revolutionize legal practice? Yeah. And I think it's important to distinguish between two ways that someone could receive legal services. I think the one that you just mentioned with law firms, that more directly implicates the entity regulations piece of professional regulations of the law. And so that limits who can own equity in a legal services business. So I think it's no surprise that all of the law firms are owned by lawyers, because you have to be a lawyer in order to own a law firm. What, again, Gillian Hadfield points out is that this can create very inefficient business models. In a lot of the smaller practices serving individuals and small businesses, lawyers work eight hours a day, but of those eight hours, only about 2.3 of them are actually spent doing billable work. The other six hours or so are just spent doing administrative tasks and sourcing clients, things like that. 
And so even if AI is very advanced and capable of performing a lot of legal work, the way that AI is integrated into the business might actually be a lot less efficient because of these constraints on how those businesses are run. But let me actually push back on that a little bit, because there's a whole cottage industry right now about using multi-agent Claude swarms to go out and find your clients and to do all your invoicing and all of that sort of stuff, right? It seems to me that one of the things that AI could do, in fact, and this is certainly how I try to use these AI tools, is to sort of automate the administrivia of my life, whether it's as a teacher or a researcher or a consultant or whatever the case is. So, I mean, I do wonder if there's a possibility here for these AI systems to be exactly the thing for a mid-sized firm that is run, because of guild rules, by lawyers who, God bless whatever skills we have, management is often not one of them. You know, maybe what we need is Claude Code, actually, just for that purpose. Why isn't that an answer to the management inefficiency problem? Yeah, so I actually think this is a very good application of AI, in part because I think it is a nice niche that is not necessarily covered by the unauthorized practice of law rules. So if I'm sourcing clients, very few people think that that counts as practicing legal services. So this regulatory barrier actually wouldn't really cover that set of applications. And so one great way to make a lot of smaller firms much more efficient is to go out and automate a lot of the tasks that are taking up their time so that they can spend more time providing legal services. And so I think this is a great application of AI, and maybe it actually helps prove the point, because it shows that when there aren't those regulatory barriers, you can actually use AI to make things much more efficient. 
I wonder, even in some of these administrative tasks, if there are competitive dynamics. With certain kinds of paperwork, certainly, you know, there's a fixed amount to get done. But something we say later on in the paper is that one of the big barriers to productivity improvements actually translating to a better version of legal services is that there are kinds of arms races. We talk about arms races between plaintiffs and defendants, and I'm sure Justin will say more about that. But one of the kinds of arms races that can happen even in the more management kind of work is, you talked about going out there and finding clients. Well, these are going to be tools that every firm is now going to be using to kind of level up how effectively they can do that. So what's going to be the end result of that process? That seems hard to anticipate. And we're seeing this in other cases. For example, in scientific peer review, there are these arms races between authors using LLMs to try to improve their productivity and reviewers using LLMs to try to automate some of the aspects of reviewing. And it's leading to some very unhealthy kinds of equilibrium, or not an equilibrium, and perhaps it's leading to a kind of death spiral. So we should be careful about things that at first appear to be productivity improvements, but can, in fact, upset existing kinds of balance and end up removing certain useful kinds of friction from the process. Well, so that's great. And actually, let's use that to then pivot to the second bottleneck, which is this sort of adversarial point. Arvind teed it up, but Justin, kind of riff on that. I mean, I think everyone has sort of an intuition that law is a somewhat adversarial profession, but talk more about that and how that might lead to AI being sort of largely a wash when it comes to the provision of legal services. Two things on this. 
I think the first is it's important to understand, like, when are we ending up in this world where this becomes the predominant bottleneck. Even if we're in a world where AI is being used very widely and it's being used to make lawyers much more productive, this is still a constraint, because if you give both sides access to AI and you're sort of locked in this zero sum process, the amount of work that each side does could essentially just go up because now both sides are being hyperproductive with AI. Instead of writing like one motion or writing five pages or looking at a hundred cases, they're now doing a hundred X that in all of those relevant domains. So the amount of outputs has increased, but because the outcome that clients ultimately care about is settling favorably or winning at trial, it takes much more work and much more outputs to reach that exact same outcome. And so although AI has made both sides more efficient, you end up doing a lot more work. The second thing on this, and maybe the historical analogy here, is the discovery process. A lot of people thought that digitization was going to make discovery way, way easier, because you can now just control F for documents. So it's much easier to find the relevant documents. What they didn't expect was that now, on both sides, there are just many more documents being created. Digitization means you can now request a lot more documents and share a lot more documents. And the net result is that discovery now consumes something like half of the time that first-year associates spend working. It's also become one of the most expensive parts of litigation, and litigation costs have not actually come down. If anything, they've gone up in a lot of the complex cases. How much of this is a litigation story versus a sort of general law story, right? So again, I think most people, when they think of law, they think of litigation, and that's obviously a large part of it. 
But litigation is only one part, and I'm not even sure it's the plurality, frankly, of legal practice. I suspect transactional work, especially when you include smaller-scale stuff like, you know, wills and things of that nature, is probably, again, if not the majority, then the plurality of legal work. And then, of course, there's a bunch of in-house stuff. So how much of this adversarial kind of arms-race problem is a litigation story? And how much of it also bleeds into, let's say, transactional work? I think some of it definitely bleeds into transactional work. It, again, depends on the dynamics within transactional work. You touched on wills, and to me, that doesn't seem like a clean adversarial process, because the goal is just sort of to match the intent of the person who wrote the will. It's not about who's on the other side. Exactly. You know, God gets it all in the end. In some transactional contexts, though, like say you're negotiating a merger between two parties, that to me starts to seem a lot more adversarial. Oftentimes transactional lawyers distinguish their work from litigators by saying, no, no, no, we're much more positive-sum, it's much more collaborative. But at the end of the day, with how you draft your contract provisions, what you choose to include, what you disclose to the other side, there's a very fine line between what is and is not okay. And how you skirt that line can actually translate into advantage for your side. And so you may end up using AI to take advantage of that. Arvind, let me actually go back to you. You mentioned sort of earlier that obviously law is not the only place where you have these arms races. You gave some examples. The example I was thinking about was actually trading, which seems like a perfect example of this, right? 
We've already seen this before AI, where you have these sort of massive high-frequency trading outfits that are spending God knows how much money, but it's sort of not clear that they're making anything necessarily that much better, because there's just someone else on the other side. I'm curious, again, zooming out, and from your work thinking about AI as a normal technology across the entire economy: how much of the economy, how much of our productive economic work, is vulnerable to these sorts of adversarial conditions, where the result of AI is not really lower cost, it's just everyone using AI more to sort of try to beat each other? Yeah, it really comes up everywhere. Trading is, of course, a perfect example. I remember 10 years ago, there were proposals for exchanges in the middle of the ocean, something like that, because the speed of light was becoming a constraint in high-frequency trading. I don't know if they ever ended up actually building it, between London and New York in the Atlantic Ocean. But it's a perfect example of sinking a lot of money into something that brings benefits that are purely relative. If neither side has access to it, you haven't lost out on anything. Your trades are a fraction of a second slower. You can't argue that it's actually a benefit to society to build these things in the middle of the ocean. So yeah, these dynamics come up literally everywhere. We just talked about peer review. But there's this great book called Bullshit Jobs by David Graeber. And he has, I think, five different categories of bullshit jobs. But one or two of those categories are all about how so many different jobs, in every different occupation, are not necessarily about providing the service better, but doing it better than your competitors. And so better on that dimension doesn't actually translate to better service for consumers. So this is not specific to law. It comes up really all across the board. 
Yeah, I remember that book. I think the essay from which it comes is even better, because it's a nice, tight read. And I do recall corporate lawyer was one of the main examples that he gives. All right, so let's now turn to the third bottleneck, which is this need for human involvement. So, Justin, what is this need for human involvement in the law? Why can't we just have, you know, robot lawyers arguing in front of robot judges while I sip, you know, daiquiris on the beach? Well, okay, so you definitely could have that world. I personally would not really want to live in that world. I think even the most sort of AI-pilled people out there are still hesitant about the idea of turning over judges in society to AI. I think there are also compelling constitutional reasons not to do that, namely Article III. But putting that aside, and assuming we want human beings to be involved, there is a limit to how quickly judges can process cases. So if you imagine, sort of going back to this litigation example, both sides are producing a bunch more work. They're writing much more sophisticated briefs. They're citing more cases. Also, it probably becomes easier to file lawsuits, so there are just a lot more lawsuits. As that happens, the new bottleneck becomes the time it takes for judges to adjudicate those cases. Going over to the transactional side, as contract provisions get longer and these negotiations become more complex, I think the bottleneck becomes the ability of human lawyers to actually understand what's going on on behalf of their clients. I, for one, would want to be in a world where corporations know what they're signing up for when they're signing contracts. And so I think having someone inside of the corporation who understands this is sort of the final bottleneck. So even though AI makes things a lot faster, there is still this question of how quickly human beings can work. 
So there's the normative question about whether we want human judges, human decision makers, and the legal question of whether there's a constitutional requirement. That's interesting. Let's put that to the side for a second, because I think there's a kind of psychological assumption, or empirical assumption, here, a matter of what is sometimes called sociological legitimacy: not whether the system is a good one, but whether people perceive it to be a good one. Whether that requires human decision makers, I'll admit, and maybe this just shows how out of touch I am with actual human beings and that I should log off Claude Code more, it is not obvious to me, actually, that there is going to be such a demand for human decision makers outside, you know, let's put criminal law, for example, to the side, but outside sort of the most high-salience contexts. I mean, it certainly seems to me that the sort of story of modern human sociability over the last 20 years is the increasing replacement of sort of human connection and human engagement with digital connection and digital engagement. And again, that may be a bad thing, right? That may well be. But it still appears to me to be a thing. So I'm curious for your thoughts, Justin, about whether that might be a possibility in the legal sphere. And then also, zooming out, Arvind, for your thoughts about how, you know, we might think about that in other domains, right? Because you could presumably tell the same story about education or mental health therapy. But at the same time, I don't know, maybe it's possible that in 10 years, we'll all be using chatbots for the bulk of mental health therapy. And people will just have gotten sort of used to that, because, you know, humans are malleable creatures. So let me start with Justin and then I'll move to Arvind. 
I know you said put the normative considerations aside, but I have to fight the hypothetical on this one, just because I do think if you're making a decision about whether someone gets, like, 10 years in prison or not, that is such an important decision, it carries such moral weight, that I would want a human being to be involved in that. Sure. But again, criminal law is still a relatively small percentage of legal practice. And I think the sort of economic story that you're talking about is also more relevant to commercial litigation and the commercial practice of law than the criminal practice of law, which is why I'm sort of curious about the, like, 90% of legal disputes that are not as high-stakes or salient. Yeah. And so I think this is where one of our reforms actually is to have sort of parallel tracks. And so you could imagine you have judges for the contexts in which human involvement is most important. And then for those less important things, you have a parallel track, such as through arbitration. And there are a lot of problems with arbitration, about whether people are actually consenting into it properly. But assuming that they are, you can have sort of these AI judges, or use AI as a way to make the process more efficient, in contexts where you're less worried about the stakes and the stakes do seem lower. And so that might be a way to, if there's a finite pool of human attention and human time that we can allocate to these tasks, allocate it in a way where it's most needed. So let me share a couple of thoughts, both specifically on judging, but also, like you said, zooming out, Alan. One thing I'd say is, even in some world where, you know, AI could make these decisions, what do we want the humans to be doing in that world? To me, you know, judges make law a lot of the time. 
And exercising human judgment about what we want the world to look like, that's the perfect example of what I would want humans to be doing in a world where all conceivable labor can be automated. I mean, I think that should be literally the last thing to get automated, again for normative reasons, regardless of whether it can or cannot be automated. Well, let me ask about that, because I'm curious to sort of push on that intuition. Where is that normative commitment coming from? I mean, you can imagine a lot of arguments for it. You could say, well, it's because I think that humans will always do a better job, at least in some cases, right, in exercising that judgment than AIs do. In which case, I would say, we did earlier, however, kind of stipulate that at least in a lot of domains, AIs are getting quite good. And then you'd have to have, I think, a somewhat rosy view of how good human decision makers are, at least kind of the median human decision maker. Or you might say, I worry that if we outsource those decisions to AIs, we ourselves will undergo an almost moral de-skilling, right? We will lose the capabilities. Or you might say, you know, human psychology just requires something carbon-based to pass judgment on me, and it'll just rebel and society will fall apart, right, if we have this done by computers. But I don't know. I'm curious to sort of push you, just to clarify where that intuition is coming from. Yeah, it's a simple answer. I think this is what it means to be in control of our own civilization. I mean, all those debates about AI safety, this is actually what it boils down to for me. Not killer Terminator robots. These kinds of moments where we put the course of humanity in the hands of machines, I think that's a line we should not cross. I mean, historically, look at any case that had a significant impact on how society functions, let's say, Brown v. Board of Education. Is that something we would want to have been decided by a robot? 
I don't think it's a matter of accuracy. There are no accuracy standards by which you can claim that AI is doing a better or worse job than a human judge. It's purely a normative question. It does not have an empirical component. And, you know, I would say that a world in which we leave these kinds of decisions up to AI is not a world I want to live in, and, hopefully, not one the majority of people want either. And just to clarify, it sounds like it's not because you don't think AI could have written Brown v. Board of Education. It's that the whole point was that humans decided. It's sort of the whole point of democracy, right? Not necessarily that we come to the best decisions, but that they are fundamentally our decisions. Is that the intuition? I'm not pushing back on it, but I think it is useful to clarify where those intuitions come from. Yeah, this is what it means to have agency as a species. These are the biggest decisions that we make about the course of our societies. Okay, so we talked about the impediments to the broad diffusion of AI: the unauthorized practice of law issues, the adversarial dynamics, and just the need for humans, and therefore humans will be a bottleneck. Let's talk briefly about some of the reforms and the solutions that you all propose. But before we get into them, I do want to ask kind of a meta question. It sounds like, at least as I read the paper, you all do think that we should have more AI in legal services. I mean, a lot of these reforms are meant to facilitate the spread. But one could, I think, just as easily look at your analysis and say, oh, thank God we have these roadblocks, right? We want there to be these really strong unauthorized practice of law rules. I'm not sure anyone would say we want to have these bullshit jobs where people are just creating costs. 
But certainly, I mean, Arvind, I think you just very eloquently set out your argument for why humans being in the loop is just kind of a fundamental axiom of what it means to be human and have a human-led society. Why isn't the last third of your paper a thank God, and we should do everything we can to keep AI out of the legal profession? Yeah, Arvind, why don't you start? Sure. I mean, I think there's going to have to be some line-drawing exercise. I think in every profession, you know, people will argue about which aspects of what it means to do that job are fundamentally human and which ones can be delegated to a machine. Certainly in the legal profession, and really any other profession, there are lots of things we've chosen to delegate and which we're comfortable with. I think, in a way, law is starting from a really good place, because there are all of these restrictions currently. And so it's kind of opt-in to AI; we have to choose to allow AI to be used for certain things. And it's not like, by default, AI is going to replace judges and lawyers. So in a way, that's good. And so that is something that I celebrate. But I don't think the current equilibrium is the optimal one. And I think there are a lot of things that make sense to delegate. Just a simple example: access to justice, for people who can't afford a lawyer, is such a scarce thing. And if AI can be used to enable public defenders to be more productive, that would be a big win. So it's an existence proof that there are some such tasks. Where that line is is not for me to say, but I don't think the current equilibrium is the optimal one. Well, let me ask the same question to Justin. And I think your perspective is particularly interesting here because, you know, you're a third-year law student. You're graduating in a few months, and you're entering a world, a legal profession, that is radically changing. And I suspect you can see that even more than, you know, many of your classmates. 
Now, I think you've made a very clever bet in focusing on AI, because even if the legal profession goes away, there will always be, I think, demand for your expertise in thinking about AI in the legal profession. But I'm just curious, from your perspective, I mean, I could imagine you as a student saying, I do not want AI interfering with my future job prospects. And so I'm curious where you come down on this point. Yeah, I actually think it takes us back to the beginning of the conversation, where ultimately AI as normal technology is a prescription about where we should focus. And I view sort of these reforms and these bottlenecks as opportunities for where we should be allocating our time and attention. Because ultimately, one thing I think you'll learn going through law school is just that there's so much that needs to be fixed; there are so many problems with our current legal system. And I view AI partially as a way to fix those problems, but also as a way to sort of push through or motivate the reforms that we've needed for a long time that aren't actually about AI. There are a lot of problems with our access-to-justice system that aren't really AI problems. But maybe now that people are thinking critically about how we should redesign our system in light of AI, we can start having a better system generally. So let me end the conversation, then, with one of the reforms, or kind of grouping some of the reforms under one category, which at least to me jumped out as the most interesting. And that is really changing these unauthorized practice of law rules. And so just talk me through what that would look like, and then also how you'd respond to some of the concerns that, well, the reason we have unauthorized practice of law is the same reason we have unauthorized practice of dentistry, right? We do want some consumer protections around this. 
And so why doesn't that mean that sort of the proposals that you all suggest, you know, the regulatory sandboxes that are in places like Utah, you should talk about what those are. Or your paper mentions Gillian Hadfield's idea of regulatory markets, where you have sort of the government regulators certifying private regulators. And it's those private regulators, in a kind of market-competitive sense, that then regulate the individuals, almost in the way that, you know, the government recognizes certain accreditation bodies, and those accreditation bodies then accredit schools. You know, all of this is very clever, but it ultimately is in the service of weakening unauthorized practice of law regulations. And someone might argue, I'm not sure I would argue that, because I worry that it's just guild capture, but one might argue that all of this is just making, you know, customers of legal services more vulnerable. So let me start with Justin. And then I'm curious, Arvind, actually, how you think about these issues more broadly, because, as I mentioned, very similar issues can come up in other professions and even in nonprofessionalized settings like software engineering. Yeah, I think ultimately, if the purpose of unauthorized practice of law regulations, sort of in their strongest form, is to protect consumers from sort of unethical practitioners that are giving bad legal service, I'm just not very convinced that they're actually doing that good of a job of it right now. It seems to be making things much more expensive. And there are some people who've passed the bar who give horrible legal services. And then if you think about the debt collection context, 70% of people are losing by default because they didn't actually respond to the lawsuit, because they couldn't afford a lawyer. 
If you look at some of the most consequential, sort of legally relevant decisions in our lives, like whether you're getting divorced or whether you're getting evicted, people just don't really have access to lawyers. And so I just don't think that UPL rules are serving their intended purpose right now. And so that's sort of why I'm for changing them in some way. I'm trying to say something useful by comparing it to other domains like software engineering, without making it sound like, you know, I'm giving advice to lawyers on how to run their profession, because that's not for me to say. Okay, so let me say this. Certainly, it's easy to understand the motivation behind unauthorized practice of law rules. But I think, as Justin said, they're currently not that great at serving their intended function. And they seem to have all of these unintended consequences, guild capture, as you might put it, that are deeply problematic. I think there could be other ways of ensuring that consumers are not harmed. As a non-lawyer, not a legal scholar, it's not really for me to say what those are. But maybe this moment of upheaval around AI is a time when we can have a lot of innovation around the way we regulate different professions and what institutional and organizational structures we put in place. Software engineering is one example of a field that, like I was saying earlier, is not professionalized, but still has a mix of various kinds of checks in place to ensure that horrible outcomes don't result. So maybe there's something to look at from different fields. Maybe we don't have to put all of our weight into unauthorized practice of law rules. I think that's a good place to leave it. Arvind, Justin, thanks for coming on the show and for writing a great paper. It's very interesting. And I do hope that both optimists and skeptics of AI in the legal profession get a chance to read it. Thank you. Thank you, Alan. This has been really fun. 
The Lawfare Podcast is produced by the Lawfare Institute. If you want to support the show and listen ad-free, you can become a Lawfare material supporter at lawfaremedia.org/support. Supporters also get access to special events and other bonus content we don't share anywhere else. If you enjoy the podcast, please rate and review us wherever you listen. It really does help. And be sure to check out our other shows, including Rational Security, Scaling Laws, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. You can also find all of our written work at lawfaremedia.org. The podcast is edited by Jen Patja, with audio engineering by me. Our theme song is from Alibi Music. And as always, thanks for listening.