How Should AI Be Regulated? Use vs. Development
A16Z partners discuss AI regulation, arguing that policymakers should focus on regulating AI use cases rather than development until marginal risks are better understood. They warn that regulatory uncertainty is already chilling US open source AI development, inadvertently giving China an advantage in the open source AI ecosystem.
- Regulatory uncertainty is more damaging to startups than actual regulation, creating funding and hiring challenges while advantaging incumbents with legal resources
- The US leads in proprietary AI models but China dominates open source AI, partly due to US regulatory uncertainty deterring companies from releasing open models
- Historical software regulation precedent suggests focusing on use-based rather than development-based rules, as seen with encryption and cybersecurity
- Current AI regulation discussions lack the robust multi-stakeholder discourse that historically guided technology policy, with academia and venture capital unusually absent from pro-innovation advocacy
- Effective AI policy requires understanding marginal risks compared to existing computer systems, which remains an open research question according to experts
"Open source is always a critical part of the innovation ecosystem because while it's not the number one business driver like the proprietary models is, it's what's used by hobbyists, it's what's used by academics, it's what's used by startups, and that tends to be the future."
"To claim that at the outset of the Internet you could have foreseen seeing how social media would develop, be used and misused is kind of a fairy tale. Like that couldn't have happened back then."
"If we focus on development and we don't focus on use, you end up introducing tremendous loopholes because it requires you to describe the system that's being developed. And right now there actually is no single definition for AI."
"This uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong. And as a result, the next generation, the hobbyists and the academics, are using Chinese models."
"I think uncertainty really is death. And in startups and we see it all the time. So for example, in the last two weeks I actually had an AI, very AI forward VC pull a term sheet from a startup which is probably going to kill the startup because they're just uncertain about the regulatory environment."
Open source is always a critical part of the innovation ecosystem because while it's not the number one business driver like the proprietary models are, it's what's used by hobbyists, it's what's used by academics, it's what's used by startups, and that tends to be the future. And so this uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong. And as a result, the next generation, the hobbyists and the academics, are using Chinese models. And I think that's actually a very dangerous situation for the United States to be in.
0:00
To claim that at the outset of the Internet you could have foreseen how social media would develop, be used and misused is kind of a fairy tale. Like that couldn't have happened back then. It can only happen once the risks emerge and are known, and then you can figure out what the bad things are that you want to regulate.
0:29
If we focus on development and we don't focus on use, you end up introducing tremendous loopholes because it requires you to describe the system that's being developed. And right now there actually is no single definition for AI. And every one we've used so far now looks totally silly because the field has evolved. So if lawmakers actually want to have effective policy, the only area that you can actually specify is the use of these things.
0:47
In this episode, A16Z's Jai Ramaswamy, Chief Legal Officer, Matt Perault, Head of AI Policy, and Martin Casado, General Partner, take a first-principles look at AI regulation, arguing that if policymakers want an effective way of protecting people from AI-related harms, they should focus on targeting those harms directly rather than on model development, at least until AI's marginal risks are better understood. Drawing on decades of software governance debates, from encryption to cybersecurity, they explain why development-level rules are difficult to define, easy to loophole, and likely to become obsolete in a fast-moving field where even the definition of AI remains unstable. The conversation also examines how regulatory uncertainty is already shaping US competitiveness by chilling open source research, advantaging incumbents over startups, and pushing the next generation of builders towards Chinese open models, making the case for evidence-based, technology-neutral policy that protects against bad behavior without stifling innovation.
1:12
This is a fun conversation for me because I get to ask Martin and Jai some questions about how you guys were thinking about AI policy before I joined the firm. So a couple of years ago the scene was really different than it is today. Sam Altman's testifying in Congress, Brad Smith at Microsoft is talking about things like licensing regimes for AI, an international regulatory agency that would regulate AI just like international nuclear regulatory agencies do. Jai, can you just start by telling us a little bit about how the firm reacted to that? Like, how did we put that in context in terms of what AI policy might look like and what we were concerned about?
2:12
Yeah, I think that for us, the big eye-opener was the Biden executive order that came out at the tail end of the Biden administration. And that order did two things that seemed very, very different to us from what had come before in the regulation of software, of computing. The first thing is it made a nod in the direction of wanting to regulate fundamental math and computing power, through restrictions on the types of models that would use certain amounts of computing power, flops thresholds, I think it came to be called. And the second was, for the first time, a questioning of the value of open source software, or as they called it, I think, models with open weights. And the reason that was such a shock to many people who had been involved, Martin amongst them, but I think Mark as well, in earlier debates around the regulation of the Internet, regulation of software, regulation of encryption, was that what was new here was a skepticism, or at least a perceived skepticism, of the way that we had regulated software before, which was really to focus on regulating use cases as opposed to regulating underlying software development. And in the case of AI, that means regulating model-layer development. And that's one of the reasons we became so actively involved, because that distinction, which has served the country so well in terms of regulating uses, has a long history in regulatory law. We have typically regulated behaviors, human behaviors and bad behaviors, as opposed to regulating invention, creation and the development of things. And I think to go down that path raises some real problems in terms of trying to develop new innovative industries, without really solving the fundamental problem of bad actors using technologies to do bad things, which is, I think, the thing that people worry about.
And the way that I would analogize this is: if you think about one of the biggest risks that we face in computing today, it's malware, it's cybercriminals, it's bad actors. And think about the way that we address that today. The creation of malware itself isn't, in fact, a crime. What's a crime is the transmission of software to compromise other computers. And the reason is very simple: a lot of the techniques, a lot of the coding that goes into creating malware can be used for pen testing, which hardens our systems. It can be used for other things. And so it's very hard to distinguish, at the programming layer, at the model layer, good uses from bad uses. But we can attack bad uses themselves. And that's what the Computer Fraud and Abuse Act does. It's what we've historically done in this space, and it's served us well. There are still bad actors that get through, but we've been able to address those concerns by focusing on bad activity. Martin will get into this more because he's our guru on most of these things, but it's even harder here, where my understanding is the coding involved is actually relatively simple. Relatively. Not for me, but for folks like Martin, the math here is relatively simple. Again, not for a lawyer, but for people who are in math, it's vectors and linear algebra. And so to try to regulate that in the United States is kind of a fool's errand, because it will be developed elsewhere, to the detriment of the ability of our industry to innovate, and it won't fundamentally alter the ability of bad people to use these things. And so I think this focus is a really, really important one, and we shouldn't lose sight of the fact that regulating models is not a particularly effective way of achieving what we want to achieve. And at the same time, it really hampers innovation. And so you want to do something that's effective, that achieves the ends you want to achieve.
And I think that's where we should be heading. But I'll let Martin speak a little bit more to why that's the case with this particular technology, because I think there's some really interesting things that people would love to learn.
2:51
8:07
So, Martin, there's a way to approach regulation here that's not just a little appealing to policymakers, it's really appealing to policymakers. The laws that we are weighing in on often regulate development; they very rarely regulate use. And so if I can try to channel my inner policymaker and give voice to what I think they have in mind: we regulate cars at the state and federal level. We regulate the airline industry at the federal level. You understand software and software regulation really deeply. Why is that model of regulation not the right one when it comes to AI?
8:08
So I think there are two considerations I want to put out there, given the last couple of years. The first one is that when we got involved in policy efforts, the conversation was so lopsided. So this isn't directly answering your question, I'll get to your question in a second, but it's really important to point out that software regulation discourse goes back to the beginning of computers, and we've dealt with some really heavy stuff, like: can you use compute to make nuclear weapons? What are the implications of the Internet? We've dealt with some really heavy stuff, and we've got a lot of policy as a result of that, and a lot of doctrines and principles around that. And the weird thing about the conversation, at least in the last two years, is that normally you've got this kind of robust discourse with many voices represented. For example, academia has historically been pro-innovation and pro-research. Venture capital has been pro-innovation and supportive of little tech. And then of course you had policymakers, you had big companies, and they had this very robust conversation. And as a result of that, you figure out what makes sense for everybody. And the strange thing about this conversation is that when we entered it, one side just was not represented at all. Academia was basically silent. VCs were doing this very strange thing of being kind of anti-innovation, which I've actually never seen before. So to begin with, I just want us all to realize that where we are right now has not been the result of a robust conversation. It's been very one-sided. We can talk about why that was. And so a lot of what we're going to say here is just: we need all the voices in the room. We need to get back to equilibrium. This is how we've done everything in the past. So now I'm going to talk directly to your question.
I just want to make sure everybody understands that a lot of the reason we got involved was just to say, hey, wait, maybe academia should be part of this conversation. Maybe little tech should be part of this conversation. Maybe VCs, which have historically been pro-innovation, maybe more of us should go ahead and talk. Okay, so the second consideration, directly to your question: how have we handled these conversations in the past? Well, the underlying doctrine has been that we've been regulating software for decades, and when we have these platform shifts, to build effective policy we have to understand the marginal differences. That's been the way that we've done it in the past. I'll give you an example. When the Internet came up, we actually had a number of big, huge shifts that happened. We had attacks that we'd never seen before, some of them on critical infrastructure. We even had, at a nation-state level, this notion of asymmetry, which means that the more you rely on it, the more vulnerable you are. And so we had this entire discourse. And the discourse was not "let's stop working on the Internet"; it was, let's understand the marginal risk, and then, based on that marginal risk, let's go ahead and come up with a policy. And the reason you want to do it that way is because you trust the policy work that you've done to date. You trust that it still applies. You trust that these are still computer systems. And if you don't understand the marginal risk, you actually can't come up with effective policy. It may not work. It may be for the wrong things. It may be counterproductive. And so there's been this very strange, almost doctrine-level shift in the AI wave, where you go to the experts, like Dawn Song at Berkeley, and you ask: what is the marginal risk? What has changed here? And Dawn Song would say, we don't know. That's a research problem.
Well, if you don't know that and you do regulation, how do you know it's even catching the bad stuff when you haven't defined it? Or how do you know it's not enabling even worse stuff? And certainly regulation will potentially hamper or put a chill on innovation, so you don't even understand where that trade-off goes. So I would say we have a long-standing approach to dealing with software regulation. Historically we've used that, and then as new things happen, we build policy on top of it. What you're saying, Jai, is a little bit different, because I would say we actually do regulate compute a lot. For example, if you put a computer in an airplane, clearly there are some regulations that apply to it. So you do add regulations depending on the deployment environment. But when it comes to net-new regulations dealing with very specific risks, you have to understand the risks first. And this was the part of the conversation that was missing.
8:47
So I'm curious how we draw the line, and this is something, Martin, that you're sort of getting at here: how we think about whether certain things fall on the use side of the spectrum or on the development side. What about developers that offer applications? OpenAI builds a model and then has ChatGPT. Can you walk through how "don't regulate development, regulate use" maps onto companies that do both?
13:28
Yeah, I mean, I guess I think we've got to be a little bit clear in this entire conversation: there's development, there's use, and then there's risks. So clearly, if someone came up with a methodology, you could show it was damaging, and that actual methodology was part of the development, you'd say, don't do that.
14:02
Yeah, FraudGPT. FraudGPT we wouldn't allow.
14:32
Well, even that would be use. I'm saying let's imagine that you could develop something that you couldn't contain. Let's say that in this case you could actually show there is a risk that doesn't exist with today's computer systems. We have not shown that at all, not at all. But if you did, then you might have the conversation of saying, hey, listen, this is an entirely new thing. But today these are just computer systems, and we have a very robust set of regulations on top of those. So until that happens, there's no point in regulating the development. Now, on the other hand, to your point, we apply these to all sorts of stuff, and people use computers for all sorts of stuff, and if those uses happen to be illegal, then it's very important that they obey the laws for those.
14:35
Yeah, Martin, is that what you're getting at? And we should just cut to the chase here. I think colloquially, things have been divided into, on one side, the doomers, who think that we're on the cusp of artificial general intelligence that's going to create such a radically new type of computing persona that it raises existential risks. And then there are the people who are engineering these systems, many of whom are saying that what we have today not only doesn't approach AGI, but it's not actually going to get to AGI. And there seems to be a divide in the conversation amongst people who believe those two things, or at least profess to believe them. And is what I hear you saying that, look, if we were creating, and we knew we were creating, Skynet, that would be one set of conversations. But where the engineering is today, the emerging consensus is actually that that's not what we're creating. There are enormous engineering challenges to get there. The current versions of what we have aren't going to get there. I think Yann LeCun mentioned that recently.
15:31
Let's tie this entire conversation together. Maybe this is the synopsis. We can all agree you should not use computers to do illegal things, right? So it's totally sensible to regulate the use of computers. That said, the building of systems, maybe you do want to regulate that, maybe you do. But in order to know even what to regulate, you would have to understand the marginal differences. Otherwise you couldn't even come up with a sensible policy. What would you even describe? And that is still an open research question. So this kind of summarizes the last two points that we've been making; it unifies these two things. And so what have we done historically as an industry? What we do is we study the marginal risk, and we've got a whole deep discipline in cybersecurity, and then if we come up with marginal risks that are different, we implement policies around those. But right now we don't have that. In fact, there was the whole SB 1047 thing in California, which we got very involved in, which was trying to regulate large models, and out of which an independent body was created. And Dawn Song did such a great job, because you'd even put her in maybe the doomer-curious faction. So the doomer-curious Dawn Song, professor at Berkeley and longtime security researcher, says we need evidence-based policies, otherwise we don't know what to regulate. The question of marginal risk is still an open research question. Again, this is a world expert. So let's focus on the marginal risk. And that's really where we are, I think, in the consensus discourse amongst many of the experts. But that's not what percolates up to the policy level.
16:44
So I really like your line about trusting the policy process to date, because I actually think that's where a lot of the disagreement is: people who have comfort that existing law will at least provide some ways to address risk and marginal risk, and people who don't. And then also, I think some part of this community has said: we tried the wait-and-see, look-for-research approach with social media, and that didn't work. So I'm curious how you think about both of those components. What gives you confidence about the policy process to date? And Jai, I'd love your thoughts on that as well. And then, when people say, let's not repeat the mistakes of social media, what are your thoughts on that?
18:50
I just want to make sure that we pose the question correctly, because this one is going to be very easy to conflate. The innovation wasn't social media; the innovation was literally the Internet. And then there was a use, and that use was social media. So in the context of this question: would you give up the entire Internet rather than regulate social media, if it turns out social media is bad? I think the broad consensus is: listen, social media may be bad for minors, maybe we need protections there. Social media may be bad for different political systems, maybe we need protections there. But that, to me, is very much a use of the Internet. At least I don't know anybody sensible who says we should never have created the Internet because of social media. And I think that would be a vast minority opinion if so. So I just want to make sure that we're not answering the wrong question here.
19:29
And Martin, riffing off what you said before, I think that's a really good example, Matt. Bring yourself back to 1998, or even 2000, or even when Facebook, the college version, launched. What would you regulate? Because everything that's happened since then was just a glint in the eye of all of these people. Nobody had any conception of what social media was going to become. So to Martin's point: to claim that at the outset of the Internet you could have foreseen how social media would develop, be used and misused is kind of a fairy tale. That couldn't have happened back then. It can only happen once the risks emerge and are known, and then you can figure out what the bad things are that you want to regulate, and then you can have a robust conversation amongst different stakeholders.
20:22
Right. And you could even say, listen, we were not aggressive enough. But again, that is about the use. Nobody is saying you should not have done Internet research. Nobody's saying that. But that's what's being said now: you should not do the core research. They're not talking about the actual use and application of these things. And so I think anybody who thinks we got it wrong in social networking is drawing the wrong parallel here.
21:19
So, Jai, what about the enforcement-tools side? You, amongst the three of us, are the only one who's prosecuted cases, who's put together cases, who's looked to the legal arsenal that you might be able to utilize to try to address harms. To use Martin's phrasing, what gives you confidence, trust, in the policy process that we've had to date?
21:46
I think it's just the fact that we become smarter about these technologies as we understand how they're used. Going back to the early days, when the Justice Department was trying to fight cybercrime, there was a particular unit within the FBI through which all forensic data and forensic tools funneled when you seized a hard drive. And these units were few and far between. But as cybercrime became prolific in all types of crime, the tools that investigators had, and this is on the criminal side, we can talk about the regulatory side separately, had to expand. And so today you're probably not a white-collar investigator or a white-collar agent of any sort unless you've got forensic tools and some sort of cyber experience in your background. And we now know a lot better how to investigate these cases, how to find perpetrators. On the regulatory side, similarly, we know that in the early days there was a lot of fear of encryption. Martin, I think, lived through these debates. At the time, law enforcement was implacably opposed to encryption and wanted backdoors built into it. And all the technologists said, look, that's a mistake, because backdoors aren't just used by good guys, they're used by bad guys. Not to mention that the Internet is fairly porous; you're going to need encryption if you want to do things like e-commerce or send private information over the Internet. And that has borne out. E-commerce on the Internet would be impossible without robust encryption today. And what settled the debate, in some senses, was PGP, to a certain extent, once it became prevalent and useful and people could actually understand it. Has the encryption problem gone away? No, there are still conflicts between law enforcement and tech companies. But we figured out ways of navigating through this.
Not perfect, but ways that don't hamper innovation, that don't throw the Internet out just because a lot of bad stuff happens on it. And we've learned to coexist with these tools through that. And so that experience gives me some confidence that we can work our way through this. Is it going to be messy? Are there going to be risks associated with this? There are, and there always are risks. And I don't mean to minimize the risks.
22:09
24:44
But you have to deal with those risks as they actualize themselves, as they emerge, because the ability of the human mind to conceive of those risks in a vacuum, and to know what you would concretely do about them, is very limited. It's hard to know how you would actually manage those things without understanding what their implications are. And I think we've done that in various instances.
24:46
Yeah, I think this is exactly right. And I think a good way to paraphrase what Jai just said, with which I totally agree, is that we've hit this equilibrium where we're balancing a lot of things, like what good guys can do versus what bad guys can do. This is an equilibrium state. Innovation versus safety: this is an equilibrium state. And we've developed a number of policies to maintain this equilibrium state. And within that equilibrium state there's a lot of work we can do. We may decide certain applications of AI are bad; there's a lot of stuff we can do within an equilibrium state. The question is, do we know enough to change that equilibrium state? Do we know enough to handicap the good guys when the bad guys have access to the technology? Do we have enough knowledge to handicap medical innovation for some precautionary concern? So I think the reason that we're banging the drum, and we have been, is not that we have some ideological bent one way or the other. The reason is that there's this equilibrium that balances a lot of trade-offs, and you should consider all of those trade-offs. And if you're going to change that equilibrium, which could impact innovation, which could impact medicine, which could impact our ability to defend ourselves, you need really good justification. And a good justification is not "well, maybe it's dangerous", because that hearkens back to the precautionary principle, which has just not worked for innovation, certainly in modern times. So it really is a methodological concern, not an ideological or issue-specific concern that we have.
25:09
And I think there are two data points there that are really important. One is: look who was first out of the box with AI regulations. It was Europe, right? They put something in place that has had a hugely detrimental impact, and I'm sure Martin could speak volumes about it. Italy banned ChatGPT, you know, access to these things. But they did it, and they've had to walk a lot of it back. The EU just recently came out with a recognition that the AI framework is flawed and they need to walk it back. Now, I don't think they're going to repeal every single regulation on the books, but it shows the dangers of moving into this world. And now the thing that they're most concerned about is that the US and China will be producing all the models and Europe won't, and their people won't even have access to some of these things. So that's one data point.
26:52
I think that's exactly right if you look at where the equilibrium is. Listen, we have played with the equilibrium. We have chilled development. We've had a different approach. And I think the net of it, in retrospect, is that we've given China a head start. And now look at the use of open source AI models, just as one sector. I'm not saying open source is all that matters; just look at it as one indicative sector. They are currently dominant. And so I just feel like we have lost the ability to consider all the implications of this. And that's why we're trying to get a robust conversation back in.
27:49
Yeah. Martin, can I ask you a question on that? I'd love to follow up on the open source question, because it seems to me that if you wanted to take a skeptical view of this, you could say, well, China was just better at this stuff; it doesn't really have to do with regulation. But you've been seeing what happens on the ground with researchers when they're faced with this kind of uncertain regulatory environment and what it does. So could you put some concreteness behind that? We have, as we've spoken about on other podcasts, 1,200 bills in the states, we have a Biden EO that existed and no longer exists, and we now have federal legislation that may come to bear, and there's a bunch there. So how does that play into the people who are developing this software, and how does it lead to a chilling effect? Because I just think that some concreteness around that would be helpful for people to understand.
28:29
Yeah, and listen, I don't want to trivialize the situation. These things are incredibly complex, they're multifarious, they involve a lot of factors. And I think to try to reduce it to "oh, it's just regulation" is incorrect.
29:28
Right.
29:39
But I do think it matters. So let me start by saying the leader in AI is the United States. The United States' closed source models, from OpenAI and Anthropic and Google and the rest, are the best in the world, and nobody's even close. But closed source or proprietary systems tend to be the primary capital drivers as far as business goes; they're not the primary innovation drivers. Hobbyists, researchers, academics, a lot of the industry tends to get pushed forward by open source, and that ends up becoming the future. We saw this with operating systems, with Linux, very famously. So it's a very, very important part of the ecosystem.
29:39
So can you give us a sense of the impact of regulation on open source development? Open source seemed under threat a couple of years ago. Then there was this DeepSeek moment and everyone sort of backed off.
30:22
It's really weird, because the people arguing against open source are not the people you'd expect. It's researchers, who have always been the champions of open source, as have VCs. So it's been a very strange time. So here's the thing. The United States is the lead in AI by far. We have the best models, the best proprietary models, the best SOTA models, but they tend to be proprietary. And China has done a great job with open source, which you'd actually expect from a number two. This is the classic case in technology: the leader does the proprietary version, because you can run faster that way, you can verticalize better that way. And then the number two says, well, we'll do open source as our approach. The question is why we have not caught up, why we don't have any. Right now China is running away with open source models. I think a lot of it has to do with regulations and fear of legal action due to things like copyright. So for example, when you do see the big labs release open source models, and they're the ones that would release the best ones, a lot of the time you could tell they were trained on synthetic data. And why would they do that? Well, I would surmise the reason is that you don't want somebody, and we have a lot of spurious lawsuits like this, basically trying to pull out proprietary images and then suing the company. So there's actually a lot of risk to putting out open source models if you're a US company. And I think we're already seeing a bit of a chilling effect based on the uncertainty. I would say that more than any given rule, the uncertainty about where the landscape is going to go has had a chilling effect that keeps our big labs and big companies, which are the leading AI organizations in the world, from putting out open source models. I think that's very, very clear at this point. So instead they stay proprietary. So we're still in the lead.
But the problem is that open source is always a critical part of the innovation ecosystem, because while it's not the number one business driver like the proprietary models are, it's what's used by hobbyists, it's what's used by academics, it's what's used by startups, and that tends to be the future. And so this uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong, which is very, very clear. And as a result, the next generation, the hobbyists and the academics, are using Chinese models. And I think that's actually a very dangerous situation for the United States to be in.
30:32
Can you talk a little bit about the timeline for when you see the effects of this? One thing that I feel like we often hear in tech policy is people saying such and such happened and the sky didn't fall, when they're only looking out a short window.
33:03
So listen, my job is to sit there while companies come in and pitch us the next generation of companies. And of those, 1 in 10 is going to do a great job, 1 in 20 is going to be a very successful company, and maybe 1 in 50 will be a public company. So this is a view into the future. And if they're using an open source model, I would say 80% of the time it's a Chinese model. And listen, we can opine on why that's dangerous. Here's a very simplistic description, without going into sci-fi or any sort of scary scenario. If I'm a Chinese company providing open source models and I want to have an advantage, I just keep the largest model to myself and give myself a six-month advantage with it before I release it. And now everybody's dependent on you, and you have a built-in head start for the most powerful models as they come out. It's just so trivial to see why this is a bad idea for the United States and a bad idea for the ecosystem. So I will say that today that is absolutely the case: they are dominant for open source models, just open source models, but that's such an important piece. And the advantage, again without going into scariness about backdoors or any of the other stuff, literally just release cadence, can put United States startups behind.
33:17
There's a political component too that we just have to be aware of, which is that it matters very much, not just for the dominance of a particular country or a particular set of industries, that the US kind of leads in this area. If you think about the implications that open source had more broadly for soft power and for open societies, I think of AI as that on steroids, because it operates as a sort of control layer for computing and will be the interface through which human beings interact with computers going forward. It can shape the way that information is produced and the way that information is perceived. And so I do think that it matters very much whether
34:31
sort of the
35:23
advantage goes to China and models that are imbued with its values, versus a much more open view of information and the way information should be shared and disseminated. And look, it's a trivial example, but I think an important one: Tiananmen Square and how that's represented in Chinese models versus elsewhere. And I think you can see that playing out over the decades as AI becomes more important. So I just think there's a geopolitical angle here that the US really should be taking very, very seriously. In some ways, China is playing, it feels to me at least, a different game, a game that they played with Huawei and others, which is that they understand that adoption is the thing that matters, right? If you want to create network effects, if you want to create dominance, adoption matters, and open source is one way of achieving that adoption. And while we are bickering about which states should regulate, or whether the federal government should do this or that, they're off achieving a level of penetration and a level of adoption that puts them ahead. And even if they don't have the best models, they have the models that people want to use. It's kind of like Betamax versus VHS: it doesn't matter. In some senses, the best model is one that's cheap enough that people can use it and put it into their products. And that will have implications down the road. I think that's a super important risk that's sometimes underestimated as we talk about the other risks. And so, to Martin's point, policymaking is always a trade-off. There are always puts and takes. And as somebody who studied economics early in my academic career, I can tell you that there are no perfect solutions, just equilibria of trade-offs. And so I think the robust debate is the important thing.
And I do think there are large portions of this that aren't being debated enough, in furtherance of risks that feel at least more remote than the prosaic risks that we're seeing happen every day and that we should be addressing more urgently. And this isn't a call for no regulation; I think that's the canard that's often thrown around. It's a question of what's going to be effective and what's the right balance between different competing objectives.
35:24
Yeah, and this is the perfect example of why the equilibrium is so hard, why even for the smartest people it doesn't follow intuition or reason, and why we should therefore focus more on precedent. Right. So there's this question: why should we focus on precedent? This is a great example. Historically we would not be anti open source, at least for a large portion of the constituency. And yet some very smart, knowledgeable people were anti open source on this wave, and they would write articles saying, listen, if we do open source, we will enable China. So it's a departure from historical precedent. The rationale, which appeals to intuition, is that we're enabling China. And the result is counterproductive: China actually got ahead. Right. And so I think when we have these new platform shifts with new technologies, we should follow what we've done in the past, and if we're going to depart from that, we should really understand why this time is different. Because these equilibria are so tough to understand from the outside.
37:52
So why does all this matter for little tech? That's the focus for us. When we talk about regulating use versus regulating development, what is it about development-focused regulation, the kinds of proposals we've seen, like FLOPS-based thresholds for impact assessments or audits, or for determining liability, that really affects people at the stage of company development that you, Martin, see so much?
39:02
I think the bottom line is that uncertainty really is death in startups, and we see it all the time. For example, in the last two weeks I had a very AI-forward VC pull a term sheet from a startup, which is probably going to kill the startup, because they're just uncertain about the regulatory environment. So it's not even the regulation yet that's problematic. The problem is that because we don't know what to regulate, because we don't know what the marginal risk is, there's just a bunch of proposals out there. We don't know where they're going to land, we don't know how to think about them, we have no framework. And that has really chilled the funding environment. But the same is true for the customer adoption environment, and it's true for the hiring environment. I will tell you, I hire engineers into AI startups all the time, and the regulatory question comes up all of the time. So there's kind of no aspect of a startup that isn't impacted by uncertainty in the regulatory environment.
39:31
I think the important thing from a startup's perspective is that they focus on building things, right? The thing a startup is most worried about is not existing tomorrow. And obviously startups don't want to break the law, so they're going to follow whatever laws are on the books. The problem becomes the opportunity cost of that. The reality is that if you're a large company with resources, well funded, able to deploy armies of lawyers, your innovation will slow down, but it's not going to stop. And if you're a startup and you're just a couple of people, it becomes very, very hard to operate in an environment that has layers and layers of regulation, different regulations in different states, layered on top of uncertainty at the federal and state level. You're left asking, okay, what can I do, when all I want to do is build? And that really does give an enormous advantage to incumbent players that are well resourced, that have revenue streams, that have adjacent products where they already have product-market fit. If you're Google, you've got your advertising cash cow, so you can do whatever you need to do to invest in this space. But if you don't have anything and you're trying to create an industry from scratch, to me at least that speaks for itself in terms of how those two types of companies would fare in a regulatory environment that's uncertain and that's prohibitive, or at least dissuasive, of innovation.
40:35
So, Jay, you're articulating the things that we should stay away from: regulating development, regulating the underlying innovation. When you're sitting across from a lawmaker who says, I'm hearing from constituents that they're concerned about this technology, I feel an urgent need to do something to protect my constituents, what are the kinds of things that you say in response?
42:09
I think it's a twofold inquiry. The first thing to really explain is that there are a lot of regulations and laws that already apply to AI. They're general-purpose use restrictions on all kinds of bad activity, and you don't get a pass from them just because you happen to be using AI. And candidly, if you were to write a very specific, LLM-focused rule about model abuse, in about a generation of models it's going to be irrelevant, because the definition won't apply to the world models that are being produced now, or whatever comes after that. So I do think the response is to say, look, let's figure out what is prohibited today and what you're afraid of; in Martin's language, the marginal risk. What is not covered today by the laws we have on the books? You can't discriminate using models. You can't do unfair lending using models. You can't differentially hire; we've seen examples where hiring was affected by these models and they were pulled back. All of those laws still apply, and you don't get a pass just because an AI model happens to be what's being used to implement your practice. So that's the number one thing. So where are the gaps in those laws? Let's identify those gaps, and there may very well be some, but let's see what they are. And then, sure, you should pass laws, and this is the key part of it: you should pass laws of general applicability that are technology neutral, so they don't become obsolete quickly, especially with a technology that's innovating so rapidly. You want general-use, tech-neutral laws. Let's figure out what those gaps are and let's pass those laws, and those will tend to be use-based laws for the most part. And again, this gets back into a question of federal-state relationships that I know you and I have written about. Some of those laws are going to be passed by the states.
States have plenary police powers to protect their citizens, for consumer protection purposes, for all sorts of purposes. Some of them will be federal. And that balance between what the federal government should do and what the state governments should do will get hashed out, I think. But you've got to focus on where the gaps are in the law, as opposed to assuming, I've got to run, I've got to do something, I've got to pass something, which might end up confusing things more than anything else. So I think the key is: what is not covered today that you want covered, and how do you put a framework in place that addresses the bad uses, the misuses, of these things?
42:28
I think the last thing that is really important to impress on lawmakers is that if we focus on development and we don't focus on use, you end up introducing tremendous loopholes, because it requires you to describe the system that's being developed. And right now there actually is no single definition for AI, and every one we've used in the past now looks totally silly because the technology is evolving so quickly. So if lawmakers actually want to have effective policy, the only thing you can actually specify is the use of these systems. And so it's kind of a fool's errand, even on the surface of it, to try to do it based on a description of the system, because it is just not a single system.
45:10
Jay, Martin, thanks so much.
45:53
Thank you.
45:55
It's been a pleasure.
45:56
Thanks, Matt. Thanks for listening to the A16Z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.