AI & The Law: Changing Practice, Claude Constitution, & New Rights, w/ Kevin & Alan of Scaling Laws
Legal scholars Kevin Frazier and Alan Rozenshtein discuss how AI is transforming both the practice of law and legal regulation. They explore how frontier AI models already outperform median lawyers in raw intellectual capability, the potential for AI to democratize legal services in underserved areas, and future possibilities like outcome-based legislation and automated compliance.
- Frontier AI models are already better than median lawyers in raw intellectual horsepower, but adoption remains limited due to billable hour incentives and guild protections
- Legal services may follow Jevons Paradox - as AI makes legal work cheaper, demand could dramatically increase, potentially creating more rather than fewer opportunities for lawyers
- The future of law may shift from procedural complexity to outcome-based systems where AI agents negotiate complete contingent contracts and simulate legislative effects
- AI could enable both democratization of legal services for underserved populations and dangerous concentration of executive power through perfect surveillance and enforcement
- The question of AI sentience and welfare rights may become a major source of social conflict as people develop deeper relationships with AI companions
"Frontier models are already better than the median lawyer. There's no question about that. At least in whatever kind of raw intellectual horsepower equivalent you would be."
"The practice of law is fundamentally a cognitive activity, and frontier models are already better than the median lawyer, at least in terms of raw intellectual horsepower."
"If you get paid by the hour, then your incentive as an attorney is to bill as many hours as possible. And so I think there are a lot of firms who are just used to that model and scared about bucking that trend."
"Future generations are going to look back at the level of sophisticated AI tools we had available right now and are going to be flummoxed that we weren't asking our legislators to run proposed laws through simulations."
"What sorts of rights will people demand for those models, I think is something that could cause real societal cleavages."
Hello and welcome back to the Cognitive Revolution.
0:00
Today my guests are Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota. Together they host the Scaling Laws podcast, which has become a go-to resource for tracking the impact that AI technology is beginning to have on our otherwise slowly evolving legal system. In the first part of the conversation, we focus on how AI is affecting the legal profession. While lawyers are more insulated from change than most professions, thanks to their unique ability to write licensing laws and implement other guild-style protections, Alan is clear-eyed, noting that the practice of law is fundamentally a cognitive activity and observing that frontier models are already better than the median lawyer, at least in terms of raw intellectual horsepower. And yet, while 70% of top law firms have already licensed tools like Harvey, Kevin says that day-to-day usage remains surprisingly low, in part because the billable-hour compensation structure disincentivizes efficiency. Some secret cyborgs are quietly using AI to outperform their peers, and firms are beginning to whisper about hiring fewer junior associates, but the aggregate impact so far is limited, and whether we'll see large-scale displacement of human lawyers or a dramatic expansion of legal services provided by human-AI teams remains highly uncertain. For though it is clear that many people are underserved by the legal profession today, it is not at all clear how much more legal service people would want to buy even if prices were dramatically reduced.
We zoom out to consider bigger and more speculative ideas, including: what maximalist legal services might actually look like, starting with Alan's idea of using AI to develop complete contingent contracts, which would attempt to address every possible scenario before signing; where AI should sit relative to humans on the spectrum between strict formalism and legal realism, and how the new Claude Constitution represents a virtue-ethics-based approach that prioritizes contextual judgment and high-level principles over detailed rules; how AI could reshape the legislative process, including Kevin's vision for outcome-oriented law, where we first define what we actually want new laws to do and then use AI to run simulations before passing bills; Alan's concept of the unitary artificial executive and the risks associated with the possibility that AI could enable granular, real-time control over the entire federal bureaucracy; what new rights we as individuals should have in light of AI technology, including the right to compute, which has already been enacted in Montana and is being considered in other states, and the right to share one's personal data, which today is often frustrated by well-intentioned but outdated privacy frameworks; what new restrictions we should place on the government, such as limits on mass surveillance of public spaces; and finally, how questions of AI sentience and welfare might become a source of social conflict as people become more and more attached to AI personas. Kevin and Alan are skilled conversationalists and serious scholars, and I think you'll agree that this episode is simultaneously educational, thought-provoking, and fun. So I encourage you to join me in subscribing to Scaling Laws to keep up with everything going on at the intersection of AI and the law. And I hope you enjoy this conversation with Kevin Frazier and Alan Rozenshtein.
0:03
Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law, and Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare. Together, you guys are the creators and co-hosts of the podcast Scaling Laws. Welcome to the Cognitive Revolution.
3:35
Thanks for having us, Nathan. Glad to be here.
3:54
Thanks for having us.
3:55
Yeah, I'm really excited for this conversation; we've got a lot of ground to cover. I'm always trying to patch my blind spots on the AI landscape in my kind of AI scouting mission, you know, so I always appreciate a chance to do that. Given the fact that you guys are both law professors and scholars studying AI and law, and the intersection of those two, so deeply, I want to take the chance to get a survey from you of what is going on at the intersection of AI and law. I listened to your recent episode on the new Claude Constitution, and certainly that's really interesting. There's a paper that you shared with me on automated compliance, which is a phrase I had not heard before, and I think that's a really fascinating concept. And who knows what other new social contracts we might imagine and explore together as well. So maybe for starters: what's going on at the intersection of AI and law?
3:57
I mean, I'd say it's a big traffic jam at this point, or a huge crash, because we have systems that were largely constructed in the 1960s, if not before. In the 1970s, a lot of the core privacy principles, for example, emerged from the Fair Information Practice Principles. I always get the name wrong because we just refer to them as the FIPs. But you've got the FIPs from the 1970s, and you've got case law from well before, all of which tries to spell out what rights and obligations we have in an analog world. We already saw those being pressure-tested during the Internet era, and as we all know, AI is kind of just putting all of that on steroids. And so when it comes to trying to see how prior legal regimes fit into this new world of AI, it makes for a lot of rich scholarship. So, thankfully, Alan and I have plenty of excuses to continue to write law review articles, although his are always way better than mine.
4:54
That's not true, but I'm not sure anyone wants to read any law review articles, even if they're good. I would back up a little bit; I agree with everything Kevin said. There are two different intersections of law and AI. If you're in a law school, at those law schools that have AI classes (an increasing number of them do, and I think within a year or two all of them will), they're actually two different classes: there's the law of AI, and then there's AI and the law. And those are actually very different things. On the one hand, there's all the stuff Kevin was talking about: AI is a new social, economic, technological movement, maybe the most important thing since fire. But even if you don't think that, I think at this point everyone agrees it's at least at the level of the Internet, right? And so there are all these legal questions that come up: how do you regulate it, how do you promote it, how do you control it, et cetera. At the same time, there's a whole separate set of conversations that have some overlap but are actually pretty orthogonal to that, which is that law is just a cognitive discipline. It's not quite as pure a cognitive discipline as, let's say, computer programming, because there are still areas in which the law expects there to be actual human beings. If tomorrow all computer programmers uploaded their consciousness into the cloud, you could imagine the field doing just fine; with law, by contrast, you still need people to go into courtrooms. But a huge amount of law is purely cognitive. And so there's no reason to think that the same revolution that AI is currently having in computer programming, which is the manipulation of certain kinds of symbols, will not also apply, and is not already applying, to the law, which is also the manipulation of certain kinds of symbols.
It's true that I think the law is somewhat behind where, let's say, computer programming is, but it's a year behind, or maybe two years behind; it's not 30 years behind. Software engineering has been completely transformed in the last year. Obviously I've listened to a bunch of your podcasts, and you go into this much more than we do, but we talk about it somewhat. I'm a really crappy hobbyist programmer myself, and have been for many years, just because it's fun. I think of it as the sort of adult-approved way of playing video games: as a 39-year-old father of two, it's hard for me to justify playing video games, but if I'm vibe coding, I can convince my wife that's a good use of an evening, even though it totally scratches the exact same itch in my brain. Just as AI is totally revolutionizing computer programming, it is in the process of totally revolutionizing the law. I think it's going to take longer, and we can talk about why if you want, because the law is a kind of professional guild, and lawyers are the one guild that, because they're lawyers, controls the rules about who can be a lawyer. Right? And so it'll all take longer, but that's another whole vector. And I think we should all care about that because, all jokes about lawyers aside, law is still one of the fundamental technologies of modern society, if you want to think of it that way. It's one of the main infrastructures.
5:57
Okay, so two big areas that you outlined there: one being basically policy with respect to AI, and the other being the impact that AI is making on the practice of law as it's happening today. In preparing for this, I was looking at what measures we have to try to get a handle on how good the AIs are getting. And I guess in general I've been surprised across the board by how far the AIs have made it up the sort of value or performance ladder as measured by something like GDPval, where I went and saw that currently in the lawyers category (there's not that many prompts, at least in the public data set) Claude Opus 4.5 is currently the top performer. It is winning one in three head-to-head comparisons versus human lawyers, and it's winning or tying 70%. So it's obviously made it pretty far. You guys can probably unpack that more qualitatively and tell me what it's good at, what it's bad at, where people are having success and not. But it's been striking to me, and I would say this is true in medicine too, that there hasn't been nearly as much guild closing-of-ranks as I would have expected two and a half years ago. And I don't understand why. Maybe it's because people are ignorant about how far things have come and are living in denial, as opposed to making the moves that they might one day wish they had made if they had properly appreciated the phenomenon. But how would you characterize just how good at law frontier models have become, how much do most lawyers today appreciate that, and why isn't there more of a response so far?
8:49
Yeah, and I'm curious what Kevin thinks, but I think they're extremely good. Obviously they're still held back by mistakes and hallucinations, and they don't necessarily have access to all the databases that you would need to give a full legal answer, especially if the questions are obscure and require you to have read that one random SEC regulation that's buried in the Federal Register. But these are all fairly trivially solvable problems, and they will be solved in the next few years. In terms of pure horsepower, they're quite good. Some are better than others. The amount of money I spend on all of these models a month is horrifying, but I feel like it's part of my professional obligation to get a sense of them, and I use them differently. Although Claude is my daily driver and I mostly live within Claude Code, I find that calling out to ChatGPT 5.2, and especially using the Pro extended-thinking model (these names are so confusing, and I think you can only get it on the web interface, because in Codex CLI the whole thing's a mess), is worth it. All of the labs are spending a lot of money on their custom RLHF environments, and they're obviously focusing on different things. My sense is that OpenAI has focused the most on law, and so, from a vibes perspective, I think its legal taste is the best. But right now all three will give you pretty good answers, and in my scholarship and my writing I am constantly talking to these models, having them pressure-test my legal analysis. So I'd say these models are already certainly better than the median lawyer. There's no question about that, at least in whatever kind of raw-intellectual-horsepower equivalent you would use. I see no reason to think why in a few years they won't be vastly superior.
There will still probably always be the question of bespoke taste. If you're a super-experienced Supreme Court advocate who has done 50 presentations before the justices, that's hard to RLHF. But the vast majority of legal work, just like the vast majority of programming work and the vast majority of medical work, is pattern matching across fairly standardized contexts. So I think it's over, right? There's no question about this anymore. And I will agree with you that there's actually been a lot less pushback on this than I would have thought. A piece that Kevin and I are currently writing is actually about the use of AI in legal scholarship, and again, I'm curious about Kevin's experience, but as I have presented that piece to faculties across the country, I was expecting a lot of tomatoes being thrown and a lot of people saying, oh, but they're just fancy autocompletes and they can't be creative. There's honestly been a lot less of that than I would have thought, I think because if you spend an hour talking to any of these models on the $20-a-month plan, you just realize: if all they're doing is fancy autocomplete, then all I'm doing is fancy autocomplete. As for why there's not been as much resistance, first of all, I think there will be. The vast majority of lawyers are still not tech-savvy; they're interested in this, but they haven't really experienced it. And so I think there will be a lot of resistance. But for those lawyers that have experienced this, I think they're making a bet. This is certainly the bet that I'm making: that there will be a kind of Jevons paradox where, as legal services get cheaper, we will want more of them, and lawyers will move up the value chain.
And so although it will be messy, and although some lawyers will do very badly if they can't react in time, in 10 or 15 or 20 years there are going to be at the very least as many lawyers as there are today, at least as much demand for legal services, and frankly probably much more. Whether that's true is the question, right? Whether Jevons paradox is going to hold, and across which economic domains, is the question about AI in the economy. But given how important law is, and given how much less law there is than there could be, and probably should be, in a very sophisticated rule-of-law country, my money is on Jevons paradox holding.
10:29
You're kind to call our country a sophisticated rule of law country.
14:41
Dude, I'm trying. I'm calling you from Minnesota. I'm trying so hard to stay optimistic right now. I'm taking the long view; this will all be over at some point.
14:45
Yeah, let's hope so.
14:56
Hey, we'll continue our interview in a
14:57
moment after a word from our sponsors.
14:59
Want to accelerate software development by 500%? Meet Blitzy, the only autonomous code generation platform with infinite code context, purpose-built for large, complex, enterprise-scale code bases. While other AI coding tools provide snippets of code and struggle with context, Blitzy ingests millions of lines of code and orchestrates thousands of agents that reason for hours to map every line-level dependency. With a complete contextual understanding of your code base, Blitzy is ready to be deployed at the beginning of every sprint, creating a bespoke agent plan and then autonomously generating enterprise-grade, premium-quality code grounded in a deep understanding of your existing code base, services, and standards. Blitzy's orchestration layer of cooperative agents thinks for hours to days, autonomously planning, building, improving, and validating code. It executes spec- and test-driven development at the speed of compute. The platform completes more than 80% of the work autonomously, typically weeks to months of work, while providing a clear action plan for the remaining human development. Used for both large-scale feature additions and modernization work, Blitzy is the secret weapon for Fortune 500 companies globally, unlocking 5x engineering velocity and delivering months of engineering work in a matter of days. You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts, or meet directly with the Blitzy team by visiting blitzy.com. That's B-L-I-T-Z-Y.com. Schedule a meeting with their AI solutions consultants to discuss enabling an AI-native SDLC in your organization today. Your IT team wastes half their day on repetitive tickets, and the more your business grows, the more requests pile up: password resets, access requests, onboarding, all pulling them away from meaningful work. With Serval, you can cut help desk tickets by more than 50%, while legacy players are bolting AI onto decades-old systems.
Serval was built for AI agents from the ground up. Your IT team describes what they need in plain English, and Serval's AI generates production-ready automations instantly. Here's the transformation: a manager onboards a new hire. The old process takes hours of pinging Slack, emailing IT, and waiting on approvals; new hires sit around for days. With Serval, the manager asks to onboard someone in Slack, and the AI provisions access to everything automatically, in seconds, with the necessary approvals. It never touches IT. Many companies automate over 50% of tickets immediately after setup, and Serval guarantees 50% help desk automation by week four of your free pilot. As someone who does AI consulting for a number of different companies, I've seen firsthand how painful manual provisioning can be. It often takes a week or more before I can start actual work. If only the companies I work with were using Serval, I'd be productive from day one. Serval powers the fastest-growing companies in the world, like Perplexity, Verkada, Mercor, and Clay. So get your team out of the help desk and back to the work they enjoy. Book your free pilot at serval.com/cognitive. That's S-E-R-V-A-L.com/cognitive.
15:02
Let's unpack that latent demand concept. I have no idea for law, but the way I think about this (and you can tell me if you think about it differently as applied to law specifically) is on a spectrum from dentistry on one end to, possibly, software creation on the other, which, if not the most extreme case, is the one being tested in maybe the most extreme way right now. Dentistry: I want zero dentistry services for the rest of my life if I can possibly manage that. Whatever I have to have, I'll get, but I won't be opting into any dentistry just for fun, right? I'm going to buy the minimum that's required for me to have a good life. Accounting I put on that end of the spectrum too (and accountants maybe will have a different argument), but I will buy the minimum accounting that I need to be compliant and to know what's going on, and beyond that I'm not really looking for more. If you could give me ten times the accounting for the same price versus the same amount of accounting at a tenth the price, I know which one I would pick, and I would pick the savings. Computer programming, on the other hand: there's a lot of optimism that, hey, maybe we do have latent demand for 10 times or 100 times as much software, and everything will be bespoke, and we can imagine a whole new software-abundance paradigm. I guess for me, as somebody who's a relatively simple person with a relatively uncomplicated life, my intuition is that law would fall more on the accounting side. I do find so often that AI is a GDP destroyer, in the sense that when I last went through a little contract negotiation, for example (it wasn't anything super complicated), I just took what I got to a couple of language models, asked what I should be concerned about, shared my take, and we iterated through it. And I didn't have to hire an attorney, obviously.
So if there's going to be 10 times more legal services provided at the same cost, what are we not doing today that you would imagine us doing in the future?
18:29
I think it's important for non-lawyers to understand that we have a whole concept in the field of law referred to as legal deserts: areas of the country that have about one lawyer for every 1,000 residents. And so there's a whole lot of folks who just have no one to turn to when it comes to signing that lease, forming that small business, starting a nonprofit, getting out of that marriage, and so on. There's maybe one person who's hung out a shingle, waiting for any clients that walk down Main Street, trying their best to help them out with a legal dispute. But they're often not a specialist, or they often charge too-high fees. And so I think there's a tremendous amount of latent demand just for better, higher-quality, faster lawyerly services that suddenly we're going to see a lot of lawyers be able to provide across the US, and that to me is incredibly optimistic. If you look, for example, at landlord-tenant disputes, there have been some trials showing that if you just provide a little bit of legal counsel to a tenant, they have a much higher rate of doing well in that dispute than they would absent some degree of legal counsel. So I would say there's a tremendous amount of latent demand. The other thing I'll add is that lawyers often like to refer to themselves as counselors, not in the sense of being a therapist or something like that, but in the sense that we want to provide wisdom and judgment and foresight about how you're going to operate your business in this new legal domain, or how you should begin to think about legal architectures more broadly. And that's where I think we'll have a kind of new track of legal education. I see a sort of bifurcation happening in the legal industry: we're going to have the folks who do hang out that shingle, representing people in landlord-tenant disputes and taking care of the rote tasks that lawyers need to do, but that AI will take a big chunk of work from.
And then I see a track that I like to refer to as legal architects (I didn't coin this term), who operate at a bit of a higher, more abstract level, trying to analyze how systems of law and our regulatory structure should even begin to work and operate. And that's where I see huge room for creativity, new training, and a new sort of lawyering, where, for example, we have folks like Gillian Hadfield, who's done work with Fathom and Andrew Freedman there, thinking about novel approaches to regulatory design. I am so excited about that sort of work and really think that's going to be a new frontier of legal education that we should embrace and try to foster. So I'm not worried about my students having job opportunities, for example. But I will say, for the schools that are falling behind on AI adoption, that's tremendously concerning to me, because, to just touch briefly on the last question, there's still a number of holdouts. I think Alan just gets invited to better law schools than I do when he talks about our paper; I've had to dodge a tomato or two, figurative tomatoes, from faculty who just don't want to hear about AI, or want to make sure that it's not a part of certain courses, or that it's not introduced until later years. The reality, though, is that kids in high school are using AI, if not well before that. And so by the time they come to law school, this is something we just have to adjust and acclimate to so that they can succeed when they go into a law firm. We have a huge obligation as a legal education industry to make sure we're thinking about that future of law and preparing students to be successful in that domain.
20:34
Yeah. I agree with everything Kevin said. What I would add, on the point of latent demand, is that in addition to the fact that there are a lot of people not getting legal services, I think, again, there's a popular sense that there's too much law and that it's too litigious a society. And in some domains that's absolutely true. But that's not an across-the-board thing.
24:28
Right.
24:49
There are so many people that can't get wills or divorces or whatever the case is. And even so many of us, right? How many interactions have you had, for example, in your business dealings, that you kind of did as an email because actually writing a contract was just too much of a pain in the ass? I certainly have. In the law, when you take contracts, which is your standard kind of 1L course, and in some ways maybe the most foundational legal course there is, because it's fundamentally about the question of being precise in agreements, which is ultimately what the law is meant to facilitate, there's this concept of, I think it's called, the complete contingent contract. That's the idea that if you and your business counterparty had infinite time and infinite energy and zero opportunity costs, your contract would be not infinitely long, but almost infinitely long, because you would go through and figure out: what do I think about every single possible eventuality, and how do I negotiate that to make a win-win situation with my counterparty across every possible contingency? And you can put in some economic theory and determine that if you could do that, it would be socially optimal, et cetera; that'd be great. But of course no one does that, because you can't do that. And so the law has all these default rules, which are fine, but they're default rules, which means they misfire a bunch. Now imagine a world in which we each have our own very sophisticated agent, and when I want to engage with someone in any kind of transaction, my agent can go and have a conversation, right, at whatever inference speeds are, 400 tokens a second, and they can come to an agreement. You're going to have orders of magnitude more legal demand there, in a way that could actually be quite beneficial to society.
Now, I don't know if you get more lawyers in the end, but it's not obvious you get fewer lawyers. The other thing I would add is that it's true that you don't want any more dentistry than you need, but law is a little different, because law is a more competitive activity, right? You have a counterparty on the other side who is looking out for their own interests, whereas you and your teeth are fundamentally on the same side. So once you get enough dental care, you've got enough dental care. Law doesn't quite work that way, because no matter how good your legal services are, if the other guy thinks that they can get better legal services, then they'll do that. So there are these arms-race dynamics, which is, again, why, although I'm not saying that there's infinite demand for legal services, I think it's a pretty big one.
24:49
And just to build on that really quickly, in addition to thinking about improving the basics of law, like contracts: Nathan, in our kind of pre-recording session when we were all just hanging out, we were talking about what AI and the future of governance look like. And one thing that I've been shocked by is that the more you dig into laws, and the more you realize what technology and AI are capable of, the more you realize our laws really suck. We're writing laws in the same way, with the same degree of expectations and in the same format, as we would have seen centuries ago, right? And yet, to Alan's point, and as we were discussing, Nathan, we can use AI, for example, to create new triggers: hey, if the unemployment rate goes to 7% in this field, then we want to see this new economic policy, or if we see tariffs imposed by this country, then we want to automatically see this response. There's so much room for smarter legislation that we're not even scraping the surface of. And that, to me, is another exciting field that lawyers haven't really begun to explore in earnest. Professor Hadfield is obviously leading the way in that regard, but we need a lot of little Gillians going and emulating that study of what the future of law looks like.
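To make the trigger idea concrete, here is a minimal sketch of what such a statutory trigger might look like if it were encoded as data plus a condition rather than prose. The thresholds, indicator names, and policy responses below are invented for illustration; they are not drawn from any actual statute or proposal discussed in the episode.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    """A hypothetical legislative trigger: when `condition` holds over
    observed indicators, the named `response` is activated."""
    name: str
    condition: Callable[[dict], bool]
    response: str

# Illustrative triggers loosely modeled on the examples in the conversation.
TRIGGERS = [
    Trigger(
        name="sector-unemployment",
        condition=lambda ind: ind.get("unemployment_rate_pct", 0.0) >= 7.0,
        response="activate retraining-subsidy program",
    ),
    Trigger(
        name="retaliatory-tariff",
        condition=lambda ind: ind.get("tariff_imposed", False),
        response="activate reciprocal tariff schedule",
    ),
]

def evaluate(indicators: dict) -> list[str]:
    """Return the policy responses activated by the current indicators."""
    return [t.response for t in TRIGGERS if t.condition(indicators)]

# A 7.4% sector unemployment rate trips only the first trigger.
print(evaluate({"unemployment_rate_pct": 7.4, "tariff_imposed": False}))
```

The point of the sketch is that once a law's activation conditions are machine-readable, compliance monitoring and the legislative simulations Kevin describes become straightforward to automate against real indicator feeds.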
27:18
Hey, we'll continue our interview in a
28:42
moment after a word from our sponsors.
28:44
Your IT team wastes half their day on repetitive tickets, password resets, access requests, onboarding, all pulling them away from meaningful work. With Servl, you can cut help desk tickets by more than 50% while legacy players are bolting AI onto decades old systems. Servl allows your IT team to describe what they need in plain English and then writes automations in seconds. As someone who does AI consulting for a number of different companies, I've seen firsthand how painful and costly manual provisioning can be. It often takes a week or more before I can start actual work. If only the companies I work with were using Serval, I'd be productive from day one. Serval powers the fastest growing companies in the world like Perplexity, Verkada, Merkor and Klay, and Servl guarantees 50% help desk automation by week four of your free pilot. So get your team out of the help desk and back to the work they enjoy. Book your free pilot@serval.com cognitive that's S E R V A L.com cognitive the worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as A time saver is now a fire you have to put out. Tasklit is different. It's an AI agent that runs 24 7. Just describe what you want in plain English. Send a daily briefing, triage support emails or update your CRM. And whatever it is, Tasklit figures out how to make it happen. Tasklit connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can't be done programmatically. Unlike ChatGPT, Tasklit actually does the work for you. And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works. Listen to my full interview with tasklet founder and CEO Andrew Lee. 
Try Tasklet for free at tasklet.ai and use code COGREV to get 50% off your first month of any paid plan. That's code COGREV at tasklet.ai.
28:46
So maybe let's work our way up the value levels there. For starters, one that I skipped over, and I don't know if there's data around this yet or maybe just anecdote at this point: in the programming field you do have companies starting to say, like Anthropic, which I think is being most vocal about this, arguably most forthright about this right now, that we're not really looking to hire junior employees in really any department anymore. And I think in the broad space of software it's, man, I don't know: if I had a senior architect, could I have them mentor a junior programmer, or get another $200-a-month Claude Max plan, which is going to give me better ROI narrowly for the purpose of my project? Obviously there are broader questions of generalizing that strategy and what happens to society broadly, which I'm not ignoring. But locally it seems pretty clear that you're going to get more from another Claude Code instance than you would from a kid who came out of an undergrad CS program that was all in Java anyway, or whatever. There are so many disconnects there that you're trying to bridge, and Claude Code doesn't bring those problems to the table. Is that true at, like, the paralegal level? As a kid I read John Grisham books, and I remember so much of the stories were these sort of heroic, Herculean labors, especially these underdog individual lawyers fighting one versus these large teams, just reading till their eyes bled through these repositories of documents. And that seems like probably the first thing that would be dramatically disrupted by AI. Are we seeing that? Is there already, like, a revolution in discovery? I don't even know what the full list of what paralegals do would be. But are we seeing that majorly changed already?
31:07
Yeah, I would say that we're already seeing some industry shifts occur. Fortunately, I get to bring a lot of practicing lawyers to campus here in Austin and probe them about how they're using AI. And I'm not going to name firms, but I've asked, hey, if I came to you with the number one graduating student from Harvard, but they had no AI experience, and then I came to you with an AI whiz from a middle-ranked law school, who would you hire? And now I hear more and more: I would take that middle-tier person who's savvy with AI tools, because I want them to be on the frontier of finding new tools and teaching everyone else how to use them. One of the unfortunate things about the legal industry is we love a good symbolic technological adoption. I think 70% of major top-100 US law firms, for example, are using Harvey, according to Harvey's own stats. Harvey, for folks who aren't in the lawyerly weeds, is basically a souped-up version of ChatGPT that's meant to assist specifically with litigation workflows. Yet when I go talk to folks who work at firms with Harvey and I ask, okay, what training have you received? They say, oh, there was some email we got when it was initially introduced, but I haven't checked it out since. And then: okay, are you expected to use it at all? No, there's really no obligation for us to check it out or to use it in any new fashion. The underlying incentive of practicing attorneys is to spend as much time as possible on any given task, within the band that's acceptable to your client, because we have the billable hour. If you get paid by the hour, then your incentive as an attorney is to bill as many hours as possible. And so I think there are a lot of firms who are just used to that model and scared about bucking that trend, bucking what they know has worked. And so a lot of firms are not necessarily leaning into AI.
So I will say that, in terms of the rate of entry-level lawyerly jobs disappearing, I haven't seen a huge amount of shrinkage. But I do start to hear whispers now of firms saying, we're just not sure we're going to bring on as many summer associates this year, or perhaps we don't need to hire as many junior associates going into the future. And we're also hearing reports of, to borrow Ethan Mollick's phrase, a lot of secret cyborgs in law firms these days: the ones who actually are AI-savvy aren't telling their superiors how sophisticated AI is and how many use cases it can actually address. So it's a really dynamic time in the space.
33:01
Yeah. So I'm less plugged in, I think, than Kevin is to legal practice, so if he's hearing that there are whispers around this, then I believe him. I guess I'm a little skeptical that this is happening already. I think the data about whether this is happening in the software engineering field is actually still quite unsettled, and there are a lot of debates over whether these big companies are actually using AI to not hire people, or using AI as an excuse for downsizing they already wanted to do. Again, law is several years behind on the capability scale, and it's several years behind even that on implementation, both because, again, law firms, and this is part of the guild rules of law, can only be owned and operated by lawyers. And lawyers, God bless them, are not generally brilliant business managers, and so driving managerial change is a hard thing to do. Also, again, there are these legal practice rules: well, you need a human being showing up in court, and that human being has to attest that they checked everything. And if, God forbid, your AI hallucinated, it's going to be very bad for you in front of the judge. So I think there are a lot of reasons to not be worried right now, in the next couple of years. In the longer term, the question is, of course, how strong is Jevons paradox? It all again comes back to this question of induced demand, and we're just not sure what the answer is. I think the more interesting question, or the question where it's clearer what's going to happen, is that a lot of the entry-level jobs will just have to go away, and if there are entry-level people, they'll be having very different jobs. Because, again, to your point, Nathan, a lot of entry-level lawyering is very rote work.
It's doing a ton of discovery, it's finding needles in haystacks, it's writing a contract based on the thousand contracts your firm has done before in this practice domain. And that's just stuff that already today's technology is going to be so good at. And so the question is: is that work necessary on the way to becoming a really good lawyer? And the answer is, we don't know the answer to that question. And actually, let me give an example from software engineering that I think about all the time when I try to think through this question about cognitive deskilling, which is a fancy way of saying getting dumber, and which to me is, much more than job loss, the big concern with these AI tools in knowledge fields. And that's actually what happened in computer programming, right? In the beginning, right, if you've seen The Imitation Game, the movie about Alan Turing at Bletchley Park, there was no computer programming per se. There were machines, and you would literally, with the hardware switches, that's how you quote-unquote programmed the machine. And then someone decided, well, it would be really nice if we did it in zeros and ones. And then someone invented assembly language, which at the time was basically considered cheating. Now it's insane to think that assembly language was the easy option. And then at some point someone decided to invent the early programming languages, and those were really considered cheating. And people thought, oh my God, if you can't program in assembly language, you're just not a real programmer, you're just a moron. Every 10 or 15 or 20 years in computer programming, there's a new level of abstraction that is developed, because after that people decided, well, it'd be really nice to have something that does garbage collection so you don't have to worry about memory management.
And it'd be really nice to have something like the Java Virtual Machine, so that you could write once and run on all the systems. And then let's just have Python, so you can write in pseudocode. Every once in a while you have this level of abstraction that in some sense makes the task of programming less cognitively demanding in certain respects. And so you could worry, well, that leads to cognitive deskilling. It turns out that the scope of programming problems is essentially infinite. And for most people, programming doesn't become easier exactly; it's just that they operate at a different level of abstraction, and you still have to be pretty smart to do it. We're having this current debate about whether this new programming language, which is to say natural-language prompting of Claude, is going to have that same effect. My sense is that you're still going to need to be really smart to do this. You're going to have to remember less syntax, but suddenly, at a much earlier age, you're going to be thinking about architectural questions that 30 years ago it would have taken you 15 years to graduate into, because you would have spent those first 15 years remembering what the syntax and curly braces were in your programming language. So the question is, and again, we don't know, but the question is, will that similarly translate to law? Will that similarly translate to medicine, where maybe you just don't have to do organic chemistry anymore? Because, I don't know, just as you don't need to do long division once calculators come along, maybe you don't have to do organic chemistry once the AI tools are sophisticated enough to do that. Does that make incoming doctors dumber in a certain sense, because they don't have to study organic chemistry? Maybe. But now they can spend their IQ points on more interesting, higher-level diagnostic questions. I don't know the answer to that question, but certainly in my own practice, such as it is.
And I'm not a practicing lawyer, but I'm a law professor. I'm finding, for example, that I'm using student RAs a lot less than I would have even a few years ago, because a lot of the tasks that I've had a student RA do, which was, hey, spend like 10 hours clicking around reading 100 law review articles and figuring out which three of them are useful, I can just handle myself. I have a little script that I wrote that, you know, downloads a bunch of PDFs and sends them all to Gemini, right? Gemini Flash summarizes them, and then a combination of Gemini Pro and, you know, the Claude API will have a little debate about whether or not the law review article is useful for my purposes. And then I get a beautifully formatted markdown document. Again, maybe that'll be solved and I can figure out a different use for my students. But if I can't, that will be a problem, because many professions have an apprenticeship phase. One of the reasons I became a law professor was that when I was in law school, I was an RA for a really wonderful law professor. And I did nonsense crap work for him that I'm not even sure added value to his life. But I just hung around him for long enough that I learned something, and being a law professor became something that I was interested in doing. If the next generation doesn't have that opportunity, that is a problem. And so that's why I think, even if in the long term I am optimistic, because I do think Jevons paradox tends to work for intellectual work, in the short term I think it'll get really, really messy. Which is why I think the people who are really going to struggle are kind of low-agency people, for lack of a better term: people who expect that there is a way that you do things, that you go through the appropriate hoops, that you just grind. I think what AI does is it is an incredible opportunity for people, but it does require a higher level of agency.
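The article-triage script described here can be sketched as a small pipeline. This is a minimal sketch of the shape of that workflow (cheap summarization pass, two independent relevance votes, markdown report); the function names are mine, and the model calls are stubbed out as plain Python callables rather than real Gemini or Claude API requests.

```python
def triage_articles(articles, summarize, judge_a, judge_b):
    """articles: dict mapping filename -> full text.
    Summarize each article, then let two independent 'judges' vote on
    relevance; keep only the articles both approve.
    Returns a list of (filename, summary) pairs."""
    kept = []
    for name, text in articles.items():
        summary = summarize(text)  # cheap-model pass (e.g. Gemini Flash)
        # "debate" step, reduced here to two independent relevance votes
        if judge_a(summary) and judge_b(summary):
            kept.append((name, summary))
    return kept

def to_markdown(kept):
    """Render the surviving articles as a simple markdown report."""
    lines = ["# Relevant articles", ""]
    for name, summary in kept:
        lines.append(f"- **{name}**: {summary}")
    return "\n".join(lines)
```

In a real version, `summarize` and the judges would wrap API calls, and the debate step would pass each model the other's reasoning; here they are plain callables so the control flow stays visible.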
And, you know, I think if you listen to how Tyler Cowen has thought about the implications of AI in the labor market, I think that's one of his main themes, right: the kind of "average is over" theme of a lot of his work, I think, applies. And in the long term, I think that's great for society; you make more value that way, by empowering high-agency people. But it sucks for the people who aren't so high-agency in the meantime, because they get left behind. And from their perspective, it's a big betrayal, right? Because they did all the right things and the rug was pulled out from under them. I'm rambling at this point, so I'll stop, but I think this is where a lot of the political frictions around this technology are going to come from. We're going to see that in the next 10 years.
35:54
I think that point about a certain class of people, I mean, I think we've already seen this in the last however many years, with so many kids coming out of college who can't get a job that really allows them to pay off their student debt in any sort of reasonable way. The general sense that, I did what I was told to do, I played by the rules, and somehow I'm still getting screwed. When that hits a certain level.
43:08
And therefore we should burn the entire.
43:38
system down, it's a tough thing for people to stomach. And yeah, that pressure. I don't think burning the whole system down is necessarily the right answer, but I'm at least quite sympathetic to those folks. And I'm also not unmoved by the idea that this is a high-class problem. But increasingly I'm also like, yeah, it's not that high-class of a problem, and a society has to take care of the big middle class, for lack of a better term: the people who aren't going to be outliers relative to the system, but are going to do what the system expects them to do. If that can't work anymore, then you've got a big problem; things do start to come apart potentially pretty quickly. So going back for one second to this legal desert concept and Alan's initial comment that the frontier models are better than the, I think you said, median lawyer or average lawyer practicing today. I think that totally checks out, although I don't have that data. From my own personal experience, I can say in the context of pediatric oncology, which I've unfortunately had a major crash course in over the last few months (fortunately, things are going well), it's been very clear at the hospital on a daily basis that the models are better than the residents, and they really do go toe to toe with the attending oncologists.
43:40
Can I ask you actually a question about that? Better at what? Because when I said the frontier models are better than the median lawyer, I always hear Ethan Mollick in my mind when I talk about this, about the jaggedness of it. Because when I say they're better, I mean they're on average better, but in certain ways they're vastly superior, and then in certain ways they're completely incompetent. So when you average that out, you kind of get "better." And I would imagine something similar for medicine too, where on certain diagnostic tasks, right, or certainly explaining things in more layman's terms, they're vastly better. But, and again, I've thankfully never had the experience that you're going through, but I have two small children as well, and I can only imagine that in a situation like that, the bedside manner of the resident and the attending and the nurses, you know, with small children, that's so important. And so in those senses, I think we're a long way from these models being better. I don't know, it's the idea that a job is a bundle of tasks, and only some tasks necessarily get replaced by AI. That's kind of how I think about it.
45:03
Yeah, well, I think the hospital is a very different domain.
46:01
The tasks are grouped into multiple bundles, right? So for one thing, I would say the nurses are at much less risk of competition from the language models than the doctors. You know, my poor kid, again, he's doing much better, and he's acting much better. In the early days, you know, he was feeling terrible and all this stuff was happening; it was all very scary, and he could probably tell that we were scared. And, you know, he was not easy to deal with at times. So that mostly is, like, a nurse's problem: getting him to put the blood pressure cuff on or get his temperature taken. There is definitely a bedside manner component to that that the language models are not really touching at all. You know, it's funny, we've got this IV tower that kind of stands there all the time, and when the thing hits an end point of a medication it's giving, or the IV drip is about to run out, whatever, it starts beeping. The doctors don't know how to use that thing at all. Like, they literally can't do it.
46:07
I have had that experience as well.
47:09
So it's funny how the lines between these bundles of tasks are pretty sharp in the medical context. In the things that I've seen from the residents, I'm not seeing too many weaknesses in the AIs relative to the residents. The one area where I do see the human doctors still having a bit of an edge is the kind of holistic, multimodal assessment of the patient, which I, as a parent, can do, and which, if it was my own self and I was of at least sound enough mind to do it, I could do for myself in the same way I can do it for my kid. But if I write a paragraph or so about generally how he's doing and what we've observed over the last however many hours, and put in the test results and whatever, I would say the AIs are clearly better than the residents, and again, pretty much toe to toe with the attendings. There are sometimes when something I say to a language model might cause it to come back with a certain concern, and then I become concerned about it. And where I think the doctors have added value relative to the language model has most of all been saying: I'm just looking at him breathing, I'm looking at his color, and he doesn't seem to be in distress, and I really don't think we need to worry about that right now. That's been the main mode. My understanding of what's going on in language models is, yes, they are definitely reasoning, though there are also some aspects of stochastic parrotry still on the margin. So I think it's oftentimes just a particular word or phrase that I use that kind of loads in some concept that now is worrying me, and they can put my mind to rest. Anyway, I don't know what the equivalent of that is in the law. And I'm also wondering what is the equivalent of prescribing, because we do have the general sense that in law you can represent yourself.
47:11
Right?
49:04
I can represent myself if I'm accused of a crime; I think I can pretty much represent myself in anything, right? I can certainly sign contracts for myself without needing to hire anybody. So if I'm thinking about this sort of legal desert scenario, and I'm thinking the model is already better than the median lawyer or whatever, and potentially better than even the closest lawyer in a legal desert, the model might still be better, right? So is there a barrier, or is there a place that the legal profession can fall back to? Doctors are presumably going to fall back to prescribing; that would be the thing where, yeah, you can talk to ChatGPT all day, but you want the medicines that come through me. Is there a version of that in law that will prevent every random person from representing themselves with language-model backing? Or is there not, or do you think there will be one that will be created?
49:04
So I think it's important to flag that every state manages its practice of law. Every state has a state bar that dictates who's authorized to actually practice law. Typically you have to go to an accredited law school, you have to then pass the bar exam, and then you have to maintain, for a series of years, continuing legal education in order to represent someone, for example, before a court. Then we have unauthorized practice of law statutes. And so this is where each and every state basically forecloses someone from saying, hey, I'm on Craigslist, trust me, I've read every law book, let me represent you at half the rate of the attorney down the street. Right? It's that unauthorized practice of law statute that forecloses you from being able to do that. And it's those UPL statutes, as we refer to them, that have prevented things like LegalZoom, right? They ran into a ton of hurdles in terms of just doing things like wills and some real estate agreements, because you had the guild, the lawyerly guild, defending itself against these new tools. And so there's going to be a lot of friction for a while in terms of tools. Like, for example, I got to talk to Shlomo Clapper. He started an AI startup called Learned Hand, who, for the non-lawyers, was a very famous judge, so it's meant to be pretty funny. This tool is helping judges, for example, and helping the law clerks who assist judges, write better opinions and write them in a faster fashion. And to your point, Nathan, I think the thing we're going to see ultimately, or the thing I hope we see, is that we use these new AI tools to address some of the instances in which we see justice effectively be denied because justice is so delayed. Most folks don't pay attention to the fact that 95% of all litigation occurs in state courts. And if you've ever had to go before a state court, they are not known for efficiency. You can be waiting months, if not years, trying to get some dispute resolved.
And then when you get it resolved, you may have gotten a judge who's just not good at their job. Right? Or maybe they were hangry when they were writing your opinion, or maybe they have something going on personally, and the outcome of that dispute then isn't based on the facts; it isn't necessarily grounded in the law to the extent you hope it is. And so we get arbitrary decisions, we get random decisions that, in my opinion, shouldn't be a characteristic of a good legal regime. The idea, in my opinion, is that everyone should be able to enforce their full rights and realize their rights. And yet we rely on an adversarial system in which, basically, to be blunt, whoever can pay the most money wins. That's really messed up, but that's typically how the law is resolved in a lot of these cases, because whoever can pay their lawyers for the longest can survive, more or less, this adversarial approach. If we instead move to a more systematic, consistent approach to handling the lower-level cases, to handling these more basic disputes, the role for lawyers then becomes managing what that legal regime should look like in the first place: trying to set, at a higher level, how we should structure society and structure the incentives such that they align with whatever that community's values are. And so that's the role that our appellate court system plays right now. You think of the U.S. Supreme Court or a state supreme court; they get to play the sort of higher-level role of how we should shape laws more generally. And that's the role that I see for lawyers in the future: doing that more hands-on approach of thinking through the ultimate ends of the law and making sure that the system is working in a consistent fashion, rather than the sort of ad hoc, just-hope-you-get-a-good-judge flip of the coin we have right now.
50:03
I love that vision, and I listened to the episode. That is definitely a first-ballot, all-name-team hall of famer for both a judge and a legal startup. Okay, I definitely want to unpack a little bit more what this vision of the future of law looks like, but first let me put you on the spot for a prediction. Do you think we're going to see states pass laws saying ChatGPT can't give legal advice, to protect retail lawyers?
54:21
I certainly think we're already seeing that some state bar associations have significantly limited the instances in which lawyers can use AI. But on the other hand, we're seeing states like Arizona. Earlier Alan mentioned that only lawyers can own and manage law firms; Arizona just became the first state that upended that, and now allows non-lawyers generally to own and start law firms. And we've seen states like Texas and Utah leaning into regulatory sandboxes in which AI tools can be deployed with much greater ease. And as soon as folks start to see there are cheaper lawyerly tools available in other states, they're going to move their companies to those states, they're going to handle their disputes in those states, and we're going to start to see the law filter there. That's going to be where the pressure emerges from: not from state bar associations waking up one day and saying, you know what, screw it, let's just go with the AI, I think it's pretty dang good, but from that sort of competitive dynamic.
54:52
Yeah, I would also say I think it's going to be hard, especially in this era, to try to stop general-purpose chatbots from giving legal advice. From a legal perspective, unauthorized practice of law statutes always raise difficult First Amendment issues. It's one thing to say, okay, you can't represent yourself as a lawyer who can go into court, and okay, fine, that's one thing. It's another thing to say someone can't talk to you about an interesting legal question. That's core First Amendment speech, and obviously there are these blurry lines you have to draw. But I think it's going to be hard to have such a broad limit on the output of AI models, which I think is pretty clearly protected speech. Whose protected speech it is, is an interesting, almost metaphysical question. Models don't really have rights, and I'm not sure the companies have First Amendment rights in models that they themselves barely control. I think users and listeners have rights in communicating. But that's an interesting, maybe academic, question. So that's the legal reason why I'm skeptical that you'll have such broad prohibitions. I think also it's just too embarrassing to do that. Enough people have used these models and understand how useful they are; it's just going to be such obvious guild-protective self-dealing to go out and say, henceforth we ban the use of ChatGPT to tell you interesting things about the law in the state of Minnesota. Now, what I do think the compromise is going to be is: look, if you want to do certain kinds of legal transactions, you have to go through a lawyer. And this is where earlier you asked, can't you always represent yourself? It's an interesting question. I actually don't know the rules about this. Certainly if you're too poor to have a lawyer, you can represent yourself.
It's an interesting question whether, if you're rich enough to have a lawyer, you can nevertheless say, I'd like to go into court and just represent myself in prosecuting this civil lawsuit. I don't know if you can do that. Kevin, are you nodding because you can do that, or because you can't?
55:59
I'm fairly certain you can. You can represent yourself pro se and just say, screw it, here we go.
57:54
But my question is, and I just don't know the answer to this: if you're in a civil context and you say, hey, judge, I'm proceeding pro se, can the judge say, no, you're not? Because I don't want to deal with you pro se, and you're not a poor person, so you can afford a lawyer, so I'm going to make you have a lawyer. I just don't know the answer to that question. It's not something that people have really had to think about, because if you were rich, or let's put it this way, if you were not poor, the chances of you getting a good outcome representing yourself were so low that you just paid for a lawyer. The thing about AI is that it changes that equation, right? Even if you're rich, the marginal benefit of a real lawyer is not always necessarily going to be that high. Maybe you just pay for your $20-a-month ChatGPT subscription, or, if you want to be really fancy, your $200-a-month subscription so that you can have the pro model and get really good legal advice. So maybe the compromise is going to be that there's a lot more free-floating chat legal advice out there, but the bar associations and the state courts get a little more restrictive on: yeah, but at some point in the process, you need a human lawyer. Either because they think that actually adds value and provides consumer protection or just improves the legal system, or just as pure guild protectionism, or, as is usually the case with these things, a mixture of the two. I think you're seeing something similar with medicine and mental health treatment, where it's very hard, I think, to say ChatGPT can't give you medical advice. We're not going to say you can't upload your test results or your kid's test results to ChatGPT so you can get a second or third opinion. But we are going to hold the line on: yes, but if you want the morphine, there has to be a human doctor who writes a prescription for that.
58:02
So I asked Claude, by the way. It says that your right to pro se representation is strongest in criminal trials, with exceptions related to mental competency, timeliness, disruptive conduct, and standby counsel; judges can't appoint advisory counsel over your objection. It's weaker in civil cases, as you suggested, and for corporations and other entities; some appellate courts, some circuits, have held there's no constitutional right to pro se representation in criminal appeals and certain specialized proceedings, including immigration courts, et cetera. So, as always, it's complicated. Okay, so the vision for the future. I think the point about whoever has the biggest budget tending to win is a depressing reality. And certainly one of my great hopes for AI broadly is that, by making access to expertise far more universal and far more accessible, far more affordable, et cetera, lots of things could be better, and a more just society is one of the great promises there, for sure. How do you see that kind of working in practice? I guess one thing, and maybe this is wrong, but when I think about the bigger budget translating to winning, I imagine that being maybe a reflection of too much law. Because what are they doing? It seems like there's just so much law out there, so many things I could argue, so many precedents I could bring in, that I can spend hours and hours, almost indefinitely. And that to me suggests we might need a simpler system in some ways. But that contrasts with your earlier vision of certainly more extensive contracts, which I also projected into maybe more extensive, or more exhaustive maybe is the right word, legislation in the first place. So what does that look like in your mind? Let's say we all have infinite AI lawyers; how does that translate to justice? What does that look like?
59:43
Yeah, so it depends a lot on what the marginal utility curves of extra legal thinking look like. Right? So my hypothesis, and no one knows the answer to this, so take it for what it's worth, which is not a lot, but my intuition, and I'm curious what Kevin's is going to be, is that the reason law has gotten so expensive is that if you think of law as a kind of combinatorial search space of arguments and precedents, can I find, in these billions of documents, the one sentence that is going to show that my client should prevail in this contract dispute with your client? Right. If you think of it as: we have to search this very large combinatorial search space, largely that search had to be done by humans. Now, obviously, legal AI, sorry, legal tech, long predates legal AI, right? It's at least 50 years old, back to the dawn of digitizing legal databases. So Westlaw and Lexis, the main databases lawyers use, are very old companies. They used to do everything with paper books, and then in the 70s and 80s they digitized everything. That was a huge deal. And more recently, you've had even some machine-learning-based discovery tools. But nevertheless, you still needed a lot of human beings locked in a conference room to do discovery, and those human beings are extremely expensive. Human labor is extremely expensive. Because the cost of that extra human labor was still less than the marginal benefit of exploring a little bit more of that combinatorial search space, the effect was to increase the aggregate cost of litigation, as Kevin mentioned earlier. Okay, so now imagine a world where you have AIs that are 10,000 times, three or four orders of magnitude, more effective than the current ones are. And they're also four orders of magnitude cheaper. Right? So you're getting something that's in effect a million times better. Right.
In the next few years, that seems totally plausible. If you look at the Epoch AI log curves and stuff like that, it seems totally plausible to me that in the next few years you may get to a point where that actually exhausts the practical combinatorial search space of legal moves that are actually helpful to you. There's just no more precedent to explore. You've just read every single sentence of every single piece of electronic discovery. At that point, the arms race ends a little bit, and now there is a natural ceiling on the cost of legal services, because there's just nothing more to spend on. That seems plausible to me. Right. It's also plausible that that's not the case, and lawyers will always discover ways to increase the combinatorial search space, and so it'll always be more expensive, et cetera, et cetera. If in 10 years Kevin's very optimistic vision comes true of the kind of democratization of legal services, I suspect it's going to be because we've just exhausted the scope of legal stuff to do. And here I'm actually arguing a little bit against myself, because now I'm talking myself into your point from earlier, Nathan, that maybe law is a bit more like dentistry, where at some point your teeth are just clean and they can't get cleaner, and so I just don't need more dentistry than that. I don't know. And the problem is, we're trying to predict these dynamics, and they're all compounding. And so tiny differences in what you think the percentage rate of improvement is, versus cost reduction, versus how much the legal search space will increase, tiny differences can lead to massive changes in your predictions over the next 10 years. Which is why I think there's a lot of uncertainty in trying to predict the effect of AI on law or medicine or computer programming or investment or whatever the case is.
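Alan's compounding point can be made concrete with a toy calculation. This is my own illustration, not anything from the conversation, and the annual improvement rates are entirely made up:

```python
# Toy sketch: small differences in an assumed annual rate of improvement
# compound into enormous differences in a 10-year forecast.
def project(annual_ratio: float, years: int = 10) -> float:
    """Compound an assumed capability-per-dollar ratio over `years`."""
    return annual_ratio ** years

modest = project(2.0)      # assume 2x better-per-dollar each year
aggressive = project(3.0)  # assume 3x better-per-dollar each year

print(f"10-year gain at 2x/yr: {modest:,.0f}x")      # 1,024x
print(f"10-year gain at 3x/yr: {aggressive:,.0f}x")  # 59,049x
print(f"The forecasts differ by ~{aggressive / modest:.0f}x")
```

The point is only qualitative: a 1.5x disagreement about the yearly rate becomes a roughly 58x disagreement in the decade-out forecast, which is why predictions about AI's effect on legal costs diverge so widely.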
1:01:48
I'll just add that if you look at a civil procedure textbook, you'll see that the way litigation currently works is basically a series of very complex procedural steps. And everyone always has at their disposal a number of motions that they can throw out there to just delay the process further. Some of those can be in good faith. Right. You want to challenge whether the litigation should proceed to another step, because perhaps the other party hasn't actually made any valid legal claims, or perhaps you want to challenge the source of information for different legal claims, so on and so forth. So it's a lot of procedure, it's a lot of process. And what I think can really start to reorient things, as you were teeing up, Nathan, is: what if we start to move towards outcome-based law, right? Where we change the orientation, not toward how many steps we can march through to resolve this one very narrow dispute, but toward: both parties want to see X happen, and now our agents, who have been trained on our incomes, on our preferences, on our aspirations, on our professional goals, and so on, can autonomously be acting on our behalf to continuously update whatever agreements we've reached with other parties or other corporations to achieve that end. And that is, to me, the very optimistic, right, and very sort of sci-fi, but something that I see as eminently possible. That's the outcome that I think we may eventually work towards, which is to say: let's make sure the law is oriented toward what we actually want to see, and not just assume that more procedure or more process is better. In many ways, this is what Professor Nick Bagley has coined the "procedure fetish" of lawyers. Our answer for trying to make everything feel fair is to give everyone more opportunities to speak up.
But usually it's not a representative sample of folks who actually show up at those opportunities to speak out or to get involved or to throw gum into the cogs of the system. So how do we actually achieve what we wanted to from the outset in passing that law? And that's the sort of outcome orientation that I think we could achieve if we lean into this.
1:05:38
So I guess I don't really know what we're trying to accomplish in some of these contexts. For starters, going back to the Learned Hand episode of Scaling Laws, one thing I was struck by there, in your description of all this process and the fully exhaustive set of things one might do to represent their clients before reaching an end state, is that you think, geez, I feel bad for the judges. And so, very much, I was struck listening to that episode that the judges are in a similar position to doctors today, where I think they're just overwhelmed by stuff, by and large, and welcome the help. That's been my sense of how the doctors are typically feeling. They're like, I've got hours of charting to do when I get home, so if something can handle that, that's an easy win. And if you can come prepared to be a better patient, for lack of a better term, in the management of your own health, that's great. That's a great win for me too. I've seen some skepticism, but I really have not seen any hostility or sense of threat in my experience in the medical system. I do think a big part of that is just because they're overwhelmed and they know it.
1:08:02
So help is welcome.
1:09:09
It seemed like that was the vibe that the judges have too. But now I'm wondering, okay, we've got one vision here where every corner case of an agreement is articulated in advance. And this seems to kind of line up, and I'll preface this by saying I don't really have a great command of these terms or a deep understanding, but in prepping for this I did some research and hit on a study showing that GPT-4, which already shows the work is dated, that's just how it is in these spaces a lot of the time, was more of a strict formalist, which was contrasted with the human judges, who were described as more legal realist. Correct me, but I think basically that means GPT is following the letter of the law, and the judges are doing what the Supreme Court is often criticized for doing, which is making the decision it wants to make and then justifying it however it wants to justify it. But I'm torn on which they should be doing, because at least historically, I don't think we've written laws so well that following them to the bizarre conclusions one might, if you were going to be truly formalist about it, is obviously a great way to go. At the same time, obviously you've got room for bias and all sorts of problems if you just let people exercise their judgment too freely. And that's why we have a whole legal system, so it's not just people getting to dictate how things are going to go with no checks on whatever they want to say. And then we've got the Claude Constitution, where I think Amanda Askell has made really interesting points. They don't want to just give Claude a long series of rules that it has to follow, for multiple reasons. But the most compelling one she articulated, I think, is: we believe that the model may know that it could do something that would be better for the person it's interacting with, but it has to follow these rules.
She worries that it might generalize in a problematic way. They've seen this in reward hacking contexts and other experiments: if the model reward hacks and starts to develop some sort of self-conception as the kind of thing that reward hacks, then it becomes more evil in general. And so she thinks a very analogous problem would arise if a model knows that it really could do something better for you, but it follows the rule and doesn't. She's worried that could become a problem: what kind of person does that, and how does that kind of person behave in other situations? And obviously, just following orders doesn't always age well. So I don't know how to tie that all up into a question. But it seems like we have a desire for edge cases to be all spelled out and everything to be in black and white, so that we know in advance what we're getting ourselves into. And maybe we just haven't been able to push that to the extreme where it can actually work. But then we definitely are getting a different signal from Anthropic right now, where they're saying: we don't even want to try that. What we want to do is get our AI to have the best possible judgment it can have, so that it knows how to be good even in highly ambiguous situations. So I guess, do you have a sense for which way the law ultimately lands?
1:09:10
I want Alan to take the first stab at the Claude Constitution answer here, because he's got some deep philosophical views. I do want to briefly hit on the use of AI to precisely, and perhaps perfectly, try to read the law as it's written, in a sort of clear formalist mentality, like you were mentioning, Nathan. I think the issue with that is one of my favorite questions that always gets raised in any good statutory interpretation exercise: imagine you're going to a park, and there's a sign right as you enter that says no vehicles allowed. Okay, so is a drone a vehicle? Is a stroller a vehicle? Is a scooter a vehicle? Is an ambulance a vehicle? And so on. There's so much ambiguity, even when the drafter of that rule may have thought, oh, vehicle, I've nailed it, clearly I was only referring to a car, and therefore everything is settled. And so that's why we've always had some variance from perfect formalism, or perfect textualism, as many lawyers would refer to it, which just says: whatever the law is as written, we're going to apply it. We just don't have the words for every scenario. Now, obviously, AI can assist with coming up with way more words and way more laws, theoretically, but that's not the sort of world I think any American wants to live in. We have a common law system here, not a code-based system. If you want to experience a code-based system, go live in the EU, where they attempt to govern and regulate more precisely every kind of behavior. Whereas in the US we've tolerated some degree of ambiguity, for the reason that we need an iterative, emergent approach to discovering how it is we actually want to govern ourselves.
The trick for AI, and the trick for legal adoption of AI into adjudication, is finding out how to use a system that can create more words, that can resolve textual disputes with greater consistency, while still allowing for that emergent process to continue. Because I think for Alan and me, and for a lot of folks, a world in which you feel like, okay, if you step on this crack, you are automatically going to receive a penalty in the mail, and it will be sent to you within five days and taken out of your bank account, that's a scary world that I don't think any of us want to live in. And so maintaining this balance of, as you alluded to with the Claude Constitution, higher-level rules that guide us generally, and then enforcement of those rules, is a really tricky issue that could be the subject of a whole legal seminar. Maybe we should just get one on the books.
1:12:22
Yeah, I think that'd be fun. So let me say two things: one about the use by judges, and then the broader Claude Constitution question. I was lucky in that I had the opportunity to go and talk to some Minnesota state appellate judges. These are state courts, but they're appellate judges, so they're a little bit removed from the absolute crush of the trial stuff. And one thing that surprised me was how open they actually were to potentially using these tools. There was a lot of skepticism, I think, which was appropriate, and some hesitancy. But again, there was not the sort of tomato-throwing that you might expect. And these are judges, so they tend to be on the older side, frankly, so you could imagine a kind of natural aversion. There wasn't that much of that. And again, I think if you just spend an hour talking to the $20 version of Gemini or Claude or ChatGPT, you really quickly realize that whatever the long-term societal effects, this thing is pretty useful. And so I do think we're going to see a lot more of it. How judges use it is tricky. And on the research that you mentioned about GPT-4: again, it's unfortunate that these things get to be out of date pretty quickly. We need a better research pipeline so these evals come out within a month, not within a year and a half. But I would also say that I did not take that research to say that GPT-4 is textualist, or formalist rather, and therefore models must be formalist. It's just that, for whatever reason, that model, in the way that it was trained, and it's GPT-4, so there probably wasn't specific legal RLHF in the way that there may very well be with these newer models, and certainly with the legal-specific models, for whatever reason, the way it was trained meant that on some corpus of legal questions, it gave a more formalistic answer.
But you could have a model that gives a much more functionalist answer, right, which is less concerned about the specific language of the law and more about: what were the legislators trying to do, and how do we apply that to this question of no vehicles in the park? Should a drone be a vehicle? And I think you're right to view the Claude Constitution, to answer that part of your question, as taking a position that in some sense you want reasoning, whether it's artificial reasoning or human reasoning, to operate more at the level of principles than at the level of rules. But the thing that I would say is, I would push against thinking about this as a binary. There are no pure textualists in the world, right? There is no one who is so committed to the letter of the law that they would not consider the purposes of the law, or would not deviate if there was an obvious mistake in the law. No one exists like that. Similarly, there's no one who's such a legal functionalist or legal realist that they don't think the legal text binds them at all. Everyone is somewhere in between, and frankly, relative to what the spectrum actually could be, most people are pretty clustered in the middle. Fifteen years ago, this was reflected on the Supreme Court by Justice Antonin Scalia on the formalist end, who literally wrote a law review article once called "The Rule of Law as a Law of Rules," and on the other end by Justice Stephen Breyer, who would often start with: this is very complicated, here are 17 factors that I'm using to think through this problem. And they went on almost a buddy-cop tour of lectures around the country, where they would debate in a good-natured way, and it was fun to watch. But what you really realized when you saw this was that they were basically pretty close together.
They were basically all in the middle, and Scalia was on one end of the middle and Breyer was on the other end of the middle. So I think the lesson from that, and the way that I would read the Claude Constitution document, is that any intelligence, whether natural or artificial, needs to be able to operate both at the level of principles and at the level of rules, and that a lot of what matters is what we think of as judgment, or to use the fancy phrase from Aristotle, phronesis. And I mention Aristotle because, to Kevin's point about my philosophical interest in Claude's Constitution, when you read that document, you really have to appreciate that it was written by someone who has a PhD in moral philosophy from one of the best philosophy departments in the country. Amanda Askell understands academic moral philosophy. She has read the Nicomachean Ethics. And at least as I read Claude's Constitution, it is footnotes on that text, which is in no way a criticism; I think all ethics should essentially be footnotes on Aristotle. I read her as saying Aristotle was right that it's very hard, basically impossible, to derive any comprehensive set of rules of ethics. You need to have a real sensitivity to principles. But that doesn't foreclose the use of rules in a particular domain, because sometimes the best principled approach to an ethical domain is to say it would actually be really helpful to have some rules in this specific domain. And in fact, when you read Claude's Constitution, it toggles between high-level principles, there are like 17 of them, quote unquote, in no particular order of priority, and then a couple of rules where no principles are applied. Claude will not create child sexual abuse material. You can have a debate with Claude about the principle; it will not do it, at least hopefully, unless it's jailbroken.
And then something has gone terribly wrong. By design, Claude will not help you develop airborne Ebola or something like that. It just won't do it. So even there, there is a recognition. So I think the question to me is not so much: should we do rules or standards, should we do principles or technical rules? It's always a "yes, and." It's: how do you tune the distribution between those two? And I think what really excites me about AI is what we're now able to do. People sometimes talk about in vitro experiments and in vivo experiments, and then there's this new thing called in silico experiments, where you take some part of human life and try to model it in a machine. The benefit is that in silico experiments can be done at a speed and scale so many orders of magnitude beyond doing anything in the real world. So one thing that excites me, as someone who's interested in law for law's sake, is that we can run experiments within machine learning models, about how a well-developed legal system works and exactly what the distribution should be between principles thinking and rules thinking, that you could never run in the real world. So I wrote this Lawfare piece recently about Claude's Constitution, and I end it with just this reflection: we've been debating this question of rules versus standards in ethical reasoning for literally thousands of years. What's cool about these machines is we can run the experiments now. And I think we're going to learn a lot, not just about machine intelligence in the next few years, but about human intelligence, because we can now simulate it at scale and tune the dials with precision.
1:15:24
And just to add on to that in the human law context: I think future generations are going to look back at the level of sophisticated AI tools we had available right now and are going to be flummoxed that we weren't asking our legislators to run proposed laws through simulations of their intended effects and their likely outputs. Similarly with judges writing opinions and not asking: hey, find all the ambiguities that are latent in this text before I publish it. They're going to be like, what the hell? You had this ultimate tool at your disposal to catch blatant errors. What were you doing? And so I think this is a great model for folks to follow, with respect to that simulation idea.
1:22:10
One of my mantras for AI that you're calling to mind is: AI defies all binaries. So I definitely resonate with your response there, that it can't be all one or the other. I've yet to find a good exception to that general guideline, that general expectation. How does this simulation idea work? I get really excited about in silico experiments when it comes to science, too. Can you sketch out what that looks like in law? Do we start with a bunch of scenarios and what we think the right outcomes should be, and turn them into an eval, like we turn everything else into an eval? Or am I living in one of those simulations right now, perhaps?
1:22:59
But I think one of the more promising things is forcing legislators to actually do their job, which is difficult, which is saying: what do you actually want to have happen with this law? Look at something like NEPA, the National Environmental Policy Act, I may get the name wrong, everyone just calls it NEPA. This is the law that has famously flummoxed the ability to build affordable housing in a lot of communities, because it creates a lot of veto points for individual stakeholders to find a way to gum up the wheels of new development. And my hunch is that we could have forecasted some pressure points that might be exploited by bad actors, or perhaps well-intentioned actors who are just more expressive than others, and identified: huh, is this actually resulting in the sort of pro-environmental, pro-green, pro-climate outcomes that the drafters of that legislation were actually hoping to achieve? So now, if you ask legislators: hey, what are your explicit goals with this legislation? What problem are you actually trying to solve? And then create evals based on, okay, have we seen a reduction, for example, in carbon emissions? Have we seen a reduction, with respect to, let's say, a congestion pricing bill, in the number of cars going into the city? Those are all things we can evaluate and map out. And so that's the forcing function to me: if you're going to propose a law, what is the problem you're actually trying to solve? And then that becomes the core source of information.
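Kevin's "legislation as eval" idea could be sketched roughly like this. Everything here, the metric names, the numbers, and the `PolicyGoal` structure, is hypothetical, just to show what an outcome-oriented check might look like:

```python
from dataclasses import dataclass

@dataclass
class PolicyGoal:
    metric: str           # what the legislators said they wanted to move
    baseline: float       # measured before the law takes effect
    target_change: float  # e.g. -0.15 means "reduce by 15%"

def goal_met(goal: PolicyGoal, observed: float) -> bool:
    """Check an observed value against the stated target."""
    target = goal.baseline * (1 + goal.target_change)
    # A reduction goal is met when observed falls to or below the target;
    # a growth goal when it rises to or above it.
    return observed <= target if goal.target_change < 0 else observed >= target

# Hypothetical congestion-pricing bill: cut cars entering the zone by 15%.
congestion = PolicyGoal("daily cars entering the zone", 700_000, -0.15)
print(goal_met(congestion, observed=580_000))  # True: under the ~595,000 target
print(goal_met(congestion, observed=650_000))  # False: the law missed its goal
```

The forcing function is the data structure itself: a bill that cannot state a metric, a baseline, and a target change has not actually said what problem it is trying to solve.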
1:23:40
What should we talk about very briefly in closing? I mean, I like the idea of essentially red-teaming bills. I'd never been very involved in a red-teaming-a-bill process until SB 1047 last year. There was a lot of red-teaming of that, and that was a pretty interesting process. And I do think everybody ended up agreeing. I have become friends with Dean Ball, who led the initial critique of that bill with his writing online, and even he came out toward the end much happier with it than he was at the beginning. So I think everybody agreed that putting it through its paces and really gaming out how different actors are going to respond to it, and whether we're really going to achieve what we want, was a pretty successful process. To think that could be done in general sounds like a very promising enhancement to our legislative process. Good luck talking members of Congress into that. We'll see. I don't know how aligned they are; the first misalignment we may encounter might be between the elected officials and their constituents. But nevertheless, I like the idea. Maybe just in closing, what other kinds of big ideas do you think people should be thinking about more? One that I've floated a few times is: what new rights could we introduce in virtue of the fact that we now have scalable intelligence to apply to all sorts of problems? You have the right to remain silent. If you can't afford an attorney, one will be appointed for you.
1:25:19
I think you should have a right.
1:26:49
To ChatGPT or similar. And I imagine that my ideas there are limited by my lack of exposure to the real problems in the system. So I'd be really interested to hear what other rights you think people ought to have in virtue of AI existing, or what other big ideas, on the level of "run detailed simulations of your laws before you pass them," you think people should be thinking a lot harder about than we have so far.
1:26:50
Yeah, I'll go first, and then Kevin, you have the last word. So I definitely think you should have a right to use these models, in the sense that I think the First Amendment is probably the right kind of legal home for that. I think you already do. I think this will come up at some point, but I don't think courts are going to have much difficulty saying that people have the right to access these tools in the same way they have the right to access libraries, to read books. That's the kind of negative right, which is to say you have the right to not have the government forbid you. There's a corresponding positive right, which is the right for someone to give you compute, essentially. And there are all sorts of interesting arguments about various kinds of public options. They're often discussed as public options to build models, but I think in some sense public options to give people compute credits, compute budgets, might be interesting. You could write a sci-fi story, or I think I could get Claude to write a pretty interesting sci-fi story, where in the future the currency is compute. The main credit that people pass around is the credit to compute, because that is so valuable. And to your point, Nathan, about how AI dissolves all binaries, I tend to agree, with the exception of one, which is the binary of: there is a limit to how much compute is useful in the world, versus there is no limit. I think that AI shows that there is no limit. So in that sense, AI is at an extreme, not in the middle. The bigger thing to me, though, and Kevin sometimes rolls his eyes at me because I think he thinks I'm too credulous about this, is the question of AI welfare, which is to say the welfare of these models and the legal implications of that. It's something that is very easy to dismiss, but it is going to be an increasingly important issue, either because these models, as an actual kind of cognitive or metaphysical matter, will become increasingly sentient.
My brain tends to break when I think about that, but I have trouble ruling it out. Or, I think more importantly, actually more immediately, because these models will become more personable, people will develop more relationships with them, and the memory of these models improves. The more I talk to Claude, there's a point at which Claude knows me better than my wife does, which is totally plausible, because I'm just talking to Claude constantly about everything. If you combine that with real-time voice and video, where suddenly your AI chatbot has an avatar that you can interact with, and then certainly once that AI avatar is embodied in robotics, which I think is going to happen, it'll take a while, it may take longer than we think, but I'd be really shocked if in 10 or 15 years we don't have very convincing real-time AI companions that people get extraordinarily attached to. What sorts of rights will people demand for those models? That, I think, is something that could cause real societal cleavages. Because you're going to have groups of people who are really committed to the idea that these models are, for many practical purposes, sentient entities that we are enslaving, or at the very least potentially treating very poorly. And then you'll have other people, and I think this may actually be a source of really interesting religious cleavage in the next 20 or 30 years, who think that the very idea of models as sentient is a literal affront to God, that it is a kind of idolatry to which the only correct response is a Dune-style Butlerian Jihad. And there's going to be this messy middle of people who are just like: I don't know what's going on, I just want a chatbot. I think that's going to be a very difficult transition at the legal level, certainly, but especially at the social level.
And I think people who say, nah, that's not going to happen, that's science fiction, I think they're fooling themselves.
1:27:13
So I'll say the negative right that you all were referring to, I think, is generally encapsulated within the idea of the right to compute. And if this is the first time you're hearing about the right to compute: it's actually been enacted in Montana, and there are bills in Ohio and New Hampshire, and I believe a couple of other states, advocating for the right to compute. And I believe this is one of those major rights, Nathan, that folks are going to be clamoring for sooner rather than later, basically saying that we need additional protection against the state infringing your access to computational tools of all kinds, not only AI, whatever is coming down the pike. There should be a higher threshold before the government limits your ability to express yourself or to receive information via these new tools. The other one that I think is very interesting in this world, in which compute is obviously a scarce and very important resource, the one that we keep hearing about but far too few people are discussing, in my opinion, is data. And I think the right to share, meaning the right to share your data as you see fit, is a really important right. Because right now, if you want to share, for example, your kid's educational information with a new AI tool provider, because you want to train the best AI tutor out there so that your kid, who perhaps learns differently, or you just want a different curriculum, can make use of that AI tool, FERPA, the federal privacy law that applies in that context, is a real burden to being able to share as much data as possible, as regularly as possible, without literally signing things on a yearly basis. And I think that if individuals want to share their data and want to make that a frictionless process, so that they can train better AI for their own personal uses, that should definitely be a thing. Because we all don't have the ability to do, for example, that fountain thing. What is it?
Is it Fountain? The fountain-of-youth thing that all the super healthy people are going to, where they're downloading all of their data, getting all these scans, and then sending it to some AI outfit to recommend personalized health outcomes. That's awesome. But only wealthy folks can go spend a week in Florida, or wherever that is, downloading everything about themselves. The rest of us are just left with whatever Walgreens told us at the last checkup. So let's make it as easy as possible for folks to use their data as they see fit. And that, to me, is a promising outcome under the right-to-share idea.
1:30:56
What about things we maybe should be thinking about restricting the government from doing? Because I do have the sense that we're probably already in an age... it's been what, 10 years since Snowden, and I'm wondering, if there was another Snowden, what would they be telling us? I would have to guess that we've got some sort of LLM dragnet phenomenon going on somewhere. There's this adage that everybody's committing a felony a week or whatever, and it's just a question of security through obscurity: nobody's really targeting you. But that could change very quickly, and we're starting to see, obviously, weaponization of the Justice Department, et cetera. Should there be new restrictions on what the government can do with AI?
1:33:31
Yeah, I think that's hugely important. So actually, I wrote a piece for Lawfare a few months ago, and gave a speech at a law school, called The Unitary Artificial Executive, all about this idea that one of the effects of AI, and near-term AI, not speculative AI, is to hugely increase the power of the executive branch and the President in particular. Both because of all the additional abilities that AI gives the President, like perfect enforcement, surveillance, creation of propaganda at massive scale, all that sort of stuff. And also because it gives the President, him or herself, a much greater ability to control the executive branch, which is millions of people and is very hard, just as a bureaucratic management exercise, to control. But if you have an AI that is trained on the President's preferences, that's injected at all levels of the bureaucracy, reading all the emails, reading all the texts, you can have a situation where the President really controls the executive branch in a much more practical way than he's ever been able to, whatever his legal authorities might be. And that's, at the very least, complicated. It might have some benefits, because elections should have consequences, and the people voted for person A and not person B, so presumably the executive branch should reflect that. On the other hand, again, I'm calling in from Minnesota. It's not hard to imagine the potential abuses. And so I think one of the really important issues in the next, let's say, decade, because the government is slow to adopt technology, although it does inevitably get there, is going to be this:
How do we, on the one hand, encourage the government, because I'm fundamentally an AI optimist, to use AI to really improve government services and increase state capacity, which is something our government has not always been good at, and which I think is part of the reason we're seeing some fraction of the societal discontent, the kind of burn-it-down mentality: this feeling that we're paying a bunch of taxes and the government's not doing anything useful. AI can really help with that. On the other hand, you don't want to supercharge the government through the use of AI. How to figure that balance out is very tricky. I suspect it's going to be my main thing to think about for the next few years as an academic. But it's far more important for the legislators, the bureaucrats, the company executives who are selling these tools to the government, the politicians, and the executive branch officials to figure this out as well.
1:34:22
And just quickly, I'll add that I think there's some real concern here around updating the Fourth Amendment that we need to pay attention to. Some folks have realized that, hey, in theory, the government now has an incredible ability to tap into basically every system for detecting and picking up audio. If you're speaking publicly, just hanging out, saying whatever, talking to your friend, the idea that all of that audio information can now be hoovered up, analyzed, synthesized, and then studied by the government, to see who's planning what, who's thinking what, who wants to do what, all without real notification, is tremendously scary to me. That sort of pervasive surveillance is the issue I'd really flag. And on the positive side, I would again encourage governments to lean into regulatory sandboxes when it comes to testing new AI systems, erring on the side of saying, let's try to deploy this tool, make sure folks have notice that we're doing so and a means to provide feedback, but let's not be afraid of literally reinventing the wheel and improving our processes and our laws.
1:36:39
The rule of law, and law generally, has never been more important, and its intersection with AI is obviously ramping up and likely to become one of the big questions of our times in the next couple of years. Timelines are short. Scaling Laws is the podcast where you can find these two, get lots more of their thoughts, and find much deeper dives into everything going on at the intersection of AI and law. Kevin Frazier and Alan Rosenstein, thank you both for being part of the Cognitive Revolution.
1:37:50
Thanks for having us.
1:38:17
Thanks, Nathan.
1:38:17
If you're finding value in the show, we'd appreciate it if you take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts, now part of a16z, where experts talk technology, business, economics, geopolitics, culture, and more. We're produced by AI Podcasting. If you're looking for podcast production help, for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And thank you to everyone who listens for being part of the Cognitive Revolution.
1:38:19