What Everyone’s Getting Wrong About AI, with Arvind Narayanan
49 min
Oct 16, 2025
Summary
Arvind Narayanan, computer science professor and author of 'AI Snake Oil,' argues that AI is a normal technology whose transformative impact will unfold gradually over decades, not years. He challenges the hype around AI's capabilities while warning that capitalism's incentive structures are causing harmful deployment patterns where companies profit while society bears the costs.
Insights
- AI's real bottleneck is deployment speed and institutional integration, not technological development—regulation and existing systems will naturally slow adoption in critical sectors like healthcare and law
- Benchmark performance (like bar exam scores) is a poor predictor of real-world utility; AI excels at easily measurable tasks but struggles with the complex contextual work that defines professional practice (a short sketch after this list illustrates the grading difference)
- The AI bubble may burst within 2-3 years due to investor timelines and GPU depreciation, but underlying technologies could still deliver value over decades—similar to the dot-com pattern
- Current AI deployment often exploits broken institutions rather than fixing them; hiring automation systems work not because they're accurate but because they provide cover for already-dysfunctional processes
- Academic independence from tech companies is critically eroded in computer science, unlike medicine, creating conflicts of interest that shape which AI research gets funded and published
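To make the grading asymmetry behind the benchmark insight concrete, here is a minimal Python sketch; the function names, rubric fields, and data shapes are illustrative assumptions, not details from the episode or from OpenAI's actual GDPVAL harness.

```python
from dataclasses import dataclass

# Auto-gradable benchmark: exact-match scoring of multiple-choice answers,
# the kind of evaluation a bar-exam-style benchmark allows.
def multiple_choice_score(model_answers: list[str], answer_key: list[str]) -> float:
    """Fraction of questions answered correctly; trivially automatable."""
    return sum(a == k for a, k in zip(model_answers, answer_key)) / len(answer_key)

# Expert-graded task, in the spirit of the GDPVAL approach described in the
# episode: a human professional scores each deliverable against a rubric.
# There is no answer key, so every sample costs paid expert time.
@dataclass
class ExpertGrade:
    deliverable: str                 # e.g. a drafted legal brief (hypothetical)
    rubric_scores: dict[str, int]    # e.g. {"accuracy": 4, "completeness": 3}
    expert_hours: float              # human grading cost, absent in autograding

def mean_rubric_score(grades: list[ExpertGrade]) -> float:
    """Average rubric score across expert-graded deliverables."""
    per_item = [sum(g.rubric_scores.values()) / len(g.rubric_scores) for g in grades]
    return sum(per_item) / len(per_item)
```

The first function runs for free at any scale; the second needs a paid human in the loop for every data point, which is why expert-graded evaluations of real professional work have only recently appeared.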
Trends
- Shift from capability-focused AI development to deployment-focused policy discussions among serious technologists
- Growing skepticism about AGI timelines among credible AI researchers; diminishing returns on LLM scaling becoming apparent
- AI integration creeping through institutional back doors via incremental adoption rather than dramatic transformation, reducing visibility of systemic changes
- Regulatory capture risk: deregulation pressure from Silicon Valley targeting FDA approval, liability laws, and professional standards in high-stakes sectors
- Decoupling of AI hype from economic fundamentals; AI stocks masking underlying economic weakness and unsustainable debt levels
- Demand for independent academic voices on AI ethics and safety; recognition that tech-company-funded research cannot serve as external accountability
- Reframing of labor displacement: focus shifting from job elimination to job transformation and demand elasticity in professional services
- Section 230 liability reform emerging as bipartisan concern; algorithmic curation increasingly viewed as editorial responsibility rather than neutral hosting
Topics
- AI Bubble Risk and Economic Impact
- AI Deployment vs. Development Speed
- Benchmark Gaming and Real-World AI Performance
- AI in Legal Services and Professional Labor Markets
- Hiring Automation and Algorithmic Bias
- AI Safety and AGI Timelines
- Academic Independence and Tech Company Influence
- Regulation of AI in Healthcare and Medicine
- Section 230 and Algorithmic Liability
- AI's Impact on Scientific Research and Discovery
- Data Access and Private Company Control
- Capitalism and AI Profit Distribution
- Creativity vs. Randomness in AI Systems
- China-US AI Competition and Deregulation Pressure
- Copyright and AI Training Data Rights
Companies
OpenAI
Mentioned for developing GDPVAL, an expert-graded evaluation system for AI performance on real-world professional tasks
DoNotPay
Startup that falsely claimed to have built a 'robot lawyer' and made misleading claims about AI legal capabilities; faced an FTC complaint over its misrepresentations
NVIDIA
Discussed as an example of AI bubble scale; its market cap of $4.56 trillion dwarfs dot-com era collapses like pets.com
Meta
Mentioned as a leading social media company subject to expert witness litigation regarding Section 230 and algorithmic liability
Google
Implied as a leading social media company subject to expert witness litigation regarding Section 230 and algorithmic liability
Amazon
Implied as a leading social media company subject to expert witness litigation regarding Section 230 and algorithmic liability
People
Arvind Narayanan
Guest expert; author of 'AI Snake Oil' and 'AI as Normal Technology'; skeptic of AI hype and AGI timelines
Bethany McLean
Co-host of Capitalisn't podcast; leads discussion on AI's economic and societal impacts
Luigi Zingales
Co-host of Capitalisn't podcast; focuses on capitalism and institutional economics implications of AI
Ted Chiang
Quoted for insight that fears about new technologies are often really fears about capitalism and benefit redistribution
Marc Andreessen
Cited for argument that slowing AI development costs lives and creates moral obligation to accelerate
Jeremy Kahn
Wrote analysis of OpenAI's GDPVAL benchmark showing AI legal work approaching lawyer-level performance
Michael Cembalest
Cited for data showing AI stocks account for 75% of S&P 500 returns and 90% of capital spending growth
Quotes
"Fears about new technologies are often really fears about capitalism. How is this going to reallocate benefits and costs in our society?"
Arvind Narayanan (quoting Ted Chiang)
"Broken AI is often appealing to broken institutions."
Arvind Narayanan
"It's not a comparison of AI versus purely human skills, unaided by technology. It's always AI versus human plus AI."
Arvind Narayanan
"We have a lot of agency as individuals, as companies, as institutions, as policymakers, in how we choose to deploy these things. And so these risks are going to be realized when we deploy AI systems, not just when we develop them."
Arvind Narayanan
"If that's what you really wanted to do, you would be putting a lot more effort into making AI actually useful with regard to some of their scientific limitations as opposed to making them useful for everyday users."
Arvind Narayanan
Full Transcript
So one possibility is that we are in a bubble, that bubble bursts, but that over a period of the next couple of decades or so, we gradually do manage to productively deploy a lot of the applications that are leading to this moment of hype. And that would in many ways be very similar to the dot-com bubble. I'm Bethany McLean. Did you ever have a moment of doubt about capitalism and whether greed's a good idea? And I'm Luigi Zingales. We have socialism for the very rich, rugged individualism for the poor. And this is Capitalisn't, a podcast about what is working in capitalism. First of all, tell me, is there some society you know that doesn't run on greed? And most importantly, what isn't? We ought to do better by the people that get left behind. I don't think we should kill the capitalist system in the process. If we're examining American capitalism today, there is hardly anything more important than AI. In the first half of 2025, AI-related capital expenditure contributed 1.1% to US GDP growth, outpacing US consumer spending as an economic driver. Investment in data center construction is projected to surpass investment in traditional office buildings in the same year. And 71% of equity venture capital investment this year is in AI-related industries. AI also is a huge contributor to the stock market of late. And so all of this implies that if AI turns out to be a bubble, or more simply an overhyped technology, the US economy could crash down very fast. Those who are old enough can remember the hangover from the dot-com bubble. It was not pretty. Of course, you cannot remember. Very funny. Yet there is another, deeper reason why we at Capitalisn't are interested in AI. As today's guest wrote in his book, it is ultimately the fear of capitalism, the fear that capitalist incentives will destroy any guardrail on the development of AI, that fuels the fear of AI. Many AI experts end up being AI advocates or, even worse, AI prophets. I searched hard for an AI skeptic and I landed on Arvind Narayanan, who is a professor of computer science at Princeton University and a co-author of the very influential book AI Snake Oil. He also wrote a recent article with a less snazzy, or a less pejorative, name, entitled AI as Normal Technology. Back in 2016, he wrote a book about Bitcoin and cryptocurrency technologies, but now he has moved his interest to the space of AI. But these titles are a bit of a stretch. His book does not say that all AI is snake oil, only some of it: predictive AI. And his article qualifies that AI is as normal as electricity, which was a huge deal. Thus AI's transformation of the economy, as with electricity before it, will likely unfold gradually across decades, not years. His article has started an active conversation with AI 2027, an article written in April 2025 by another scholar who predicts that fully autonomous AI agents will be better than humans at just about everything by the end of 2027. The piece imagines AI's impacts on the economy, domestic politics, and international relations. So the dialogue is basically around just how transformative AI will be. So without further ado, let's bring Arvind into the show. In the book, you talk a lot about AI and capitalism. Why is this so important and what is so special about it? This is a quote, I think, from Ted Chiang, if I remember correctly: fears about new technologies are often really fears about capitalism. How is this going to reallocate benefits and costs in our society?
And throughout the book, we document over and over, not just in this general abstract sense, but in specific ways, how the haphazard development and deployment of many AI tools and technologies has been such that companies get the profits, but the costs are often borne by others. A classic example, of course, is that chatbots were unleashed upon the world, and then overnight you have educators, both at the K-through-12 level and at the college level, scrambling to figure out how to change their testing practices, their curricula, really all within a period of a year or even a few months, because now you have an epidemic of students using chatbots. So that's just one example of privatized profits and externalized costs, but this is a very common phenomenon, I think, in modern AI. So when you think about some of the traditional brakes on technology being deployed, do you worry that, with this arms race we are entering into with China and this increasing conflation between the government and the big technology companies, in the sense that this government seems to have decided that it's a matter of our survival to beat China, which you can extrapolate into meaning turn the tech companies loose, let them do whatever they want, some of those traditional means of slowing down deployment are going to get overrun? And, I guess more broadly, that this is leading to a confluence between at least a certain type of corporation and government that is perhaps dangerous for the future? That is absolutely something to worry about. And in particular, when I hear rhetoric about a Manhattan Project for AGI, for instance, AGI being artificial general intelligence, I do think that's deeply misguided. We've written an essay, for instance, arguing that whatever benefits we imagine are going to come from artificial general intelligence, they are not going to be realized in one moment when you've built some silver-bullet technology, but rather over a period of decades as you diffuse that technology throughout society. So a lot of this thinking, in addition to this problematic confluence, I think, is very short-termist, as if the future beyond the next few years doesn't really matter. So that's another way in which a lot of this thinking is deeply troubling. But that said, let me also suggest some reasons for cautious optimism. In many ways, I think the US is lagging behind on regulation of AI; we're seeing very clear harms, for instance, from chatbots and teen mental health and so forth. But at the same time, I think in a lot of cases, the harms that might potentially arise are in sectors that are already highly regulated. And that's something that people often forget when they talk about AI regulation as a wild west. A lot of people are worried about irresponsible use of AI in medicine, for instance. But when you look at the evidence, actually, the medical field is extremely conservative when it comes to technology adoption, simply because there is so much regulation. Any medical device has to be approved by the FDA, whether it uses AI or not. There are professional standards. People can get sued if they over-rely on AI tools, whether traditional machine learning tools or chatbots, in the context of health care.
That's the reason why, when I look at some of these panicky headlines, like, oh, two-thirds of doctors are using AI, and you dig into the American Medical Association survey from which those headlines come, the data is actually very healthy in terms of what doctors are actually doing with chatbots and these AI tools. Most of them are using them for things like transcribing dictated notes and so forth, which is very much the kind of thing doctors should be doing, even though there are some risks. There can be guardrails around those, so that more human time can be freed up for better quality medical care. I'm not seeing any evidence of doctors just YOLOing it and abdicating their responsibility to patients and delegating their care to chatbots. That sort of thing is not happening. It's a very conservative, very regulated, very responsible sector. So in reading your book and your articles, I am a bit confused, and this is not your fault, it's my fault, about which of the following things you are saying; maybe it's all of them, or feel free to pick. Number one is, I read that, yes, AI is good, but it's not as transformational as many people make it out to be. So that's one point. The second is that it's going to be adopted slowly, in part because of human inertia, but in part because of, as you described just now, some level of political resistance and existing regulation. But the two things must go hand in hand, because there are some people, the accelerationists, who think that AI is the greatest thing that will ever happen to humankind; they might not disagree that there is political resistance, but they want to eliminate it all. And part of what the Trump administration is doing is to gut all regulation: environmental, consumer protection, and all that stuff. And somebody might say this is what we need, because we need to reach AI as fast as possible, and anything that gets in the way should be eliminated. Well, I don't think you misunderstood our views. I think we're very opposed to that. I understand that some actors have a strongly deregulatory approach, and we disagree with that. We not only disagree with that normatively, that this is not a thing we should do; I also don't think it is happening very successfully. Certain kinds of environmental regulation are one thing, but again, it's not as if you can transform the medical system so that there's no regulation of medical devices, or remove liability laws so that doctors engaging in malpractice don't get sued. That's not really a thing you can do and get away with. So yeah, we're not really... This is where, sorry to interrupt you, but this is where I think there is some wishful thinking on your part. I don't think that is something that is completely out of the feasible set. We already know that some people are thinking about little nation-states in which everything is permitted, including medical devices without FDA approval and certainly no liability for doctors; there is a part of the Silicon Valley universe that is really, really going in that direction. My feeling is that unless you merge this with the fact that AGI is actually not that feasible or not that great, your warnings are fuel to the fire of those saying we should eliminate more. Because you're saying, don't worry, there is regulation, but they say, actually, we do worry, because we want to accelerate, and that's why we will eliminate all that regulation.
Yeah, I mean, a lot of what the book is about is the importance of regulation, of having regulation, having more regulation in many cases. It's not, don't worry. I don't think that's the theme of the book. There is a lot to worry about, and that's why we wrote the book. I also don't doubt that many people in Silicon Valley do want a completely deregulated environment. I mean, I don't think they're close to getting it, but I understand that they are pushing for that. And I think we should push back, and part of why we wrote the book is to help us push back by pointing out that we're not going to get to some kind of utopia in a short time frame. And therefore, if we dismantle our civil liberties, we're going to pay a lot of the costs without reaping any of these promised benefits. So in fact, we hope that our book can play a small part in resisting those crazy efforts. And the last thing I will say is that if they succeed in doing that, to me, it's not so much an AI problem. That's a democracy problem. That's a much bigger conversation. And it's not so specific to AI harms at that point. So just to keep pushing on this point, Marc Andreessen, of course, has famously stated that slowing down any AI development will cost lives, and that there is a moral obligation to accelerate it as fast as possible. Is there anything in what he said, or anything in what anyone has said, that has challenged your view or made you think twice about the core idea that it will be slower and slower is better? Sure. I mean, there are lots of potential benefits from AI. We consistently acknowledge that. We repeatedly call out, for instance, self-driving cars as having the potential to save a million lives per year, which is the number of lives that are lost in car accidents throughout the world. There are many things to figure out, labor impacts, certain new types of risks that they might introduce. But I do think all of that is worth it because of the sheer number of lives saved at the end. We try to be clear about that in the book. But that is not a call for conflating all different kinds of AI and arguing that no regulation ever has its place. One of the central things that the book tries to do is break apart different applications of AI, so that we don't lump it all into this one umbrella, and be clear about what application we're talking about and what the benefits and the risks are. And the final point I will make is that, for us, a lot of this comes down to the speed of deployment, not so much the speed of development. In our view, it doesn't matter that much if we accelerate or decelerate the development of general-purpose AI systems. What matters far more is the speed and the nature of the integration of those AI models and systems into our institutions. So we'd much rather have the conversation of, should that proceed faster or slower? And the answer is, it depends. In many cases, there is an argument for faster, such as when it comes to self-driving cars. In many other cases, there is an argument for slower. So on this theme of development versus deployment, you've talked about this idea that AI as it is now creates sort of a set of prizes based on an ability to beat a benchmark, and that the easier a task is to measure via benchmarks, the less likely it is to represent the kind of complex contextual work that often defines professional practice. To explore that more, can you maybe talk about it through the lens of law?
Because that's one area where, on the surface, you might look and say that whole industry, particularly at the junior level, is going to get wiped out by AI, because AI can do all the brief writing and everything that used to require an army of more junior lawyers. And that also feeds into a question of, if you don't have junior lawyers, how do you train senior lawyers? But I'm going on a tangent. If not law, what would you use to best explain that idea or explore that idea? Yeah, definitely. A couple of years ago, there was this company, DoNotPay, that I think started out as a way to dispute parking tickets and things like that. But they had bigger ambitions, and they claimed to have built a robot lawyer. Their kind of pitch was that if any lawyer would use AirPods to argue a case in front of the Supreme Court, just repeating what this robot lawyer said, the company would pay them a million dollars for their service in demonstrating the superiority of this robot lawyer. Now, this was never real. They surely knew that this was nothing more than a publicity stunt, because electronic devices are not even allowed in the Supreme Court. So this was never going to happen. But it went viral, because for a lot of people it at least seemed plausible that this company had managed to build a robot lawyer. There was none. They got into trouble with the FTC. You can read the complaint in detail: all kinds of misrepresentations and lies. There was no robot lawyer. But why did it even seem believable? Because there is so much hype, from companies themselves but also from press reporting, that conflates performance on benchmarks like the bar exam with AI being useful in real-world legal settings. And the simple fact is that a lawyer's job is not to answer bar exam questions all day. That is a good example of what I mean by the simpler a task is to measure via benchmarks. So bar exams are something that you can grade with yes-or-no or multiple-choice answers. And so they tend to be heavily relied upon in the evaluation of AI for various things. But they tend to be very different from the kinds of tasks, such as brief writing, that lawyers actually do for most of their day. And when you look at those more complex tasks, it's hard to autograde. You need actual experts to be grading how well AI is performing at those things. And only very recently are we seeing credible efforts to make those kinds of measurements. So OpenAI, for instance, has something called GDPVAL, where they actually pay experts to grade AI performance on various real-world tasks. That's extremely expensive to do, right? Thousands of times more expensive than automatically measuring AI performance on some benchmark. So that's why we've been very slow to see these kinds of more valid ways of measuring how AI can be useful in the real world. And let me make one final point on this. AI for law is one of my favorite applications to talk about, because it helps illustrate many fallacies when it comes to, OK, yeah, so maybe AI is improving at doing things like brief writing; what's going to be the impact on the legal profession? These fears are not new. More than 15 years ago, there was a New York Times article looking at some of the simpler types of AI and digital technology that could do a lot of the paperwork that lawyers do, and it predicted that the number of lawyers was going to go down. In fact, it's gone up a lot in that time. And there's a very simple economic framework for looking at this.
Instead of looking at AI's impact on the supply of legal work, let's look at the demand for legal work. So if it becomes cheaper for lawyers to produce a unit of legal work, what's going to happen? The number of lawsuits filed is going to go up, because that has now become cheaper. The simple fact is that there is not a fixed amount of legal work in the world, and that is true of many different domains. The demand is actually highly elastic, I think, is the term that economists would use. The two of you can probably tell me if I'm using the right terms or not. And so these simplistic ways of looking at AI's impact on the labor market, I think, are very unfortunate. They've led to a lot of confusion and misinformation and should be rejected. It's never a comparison of AI versus purely human skills, unaided by technology. It's always AI versus human plus AI. And it's unclear why a human-plus-AI team would be any worse, and usually it would be better, than AI acting alone. So full automation in most professions, I think, is going to be very much the exception rather than the norm. You have a quote you've used a couple of times in your writing, which I'd love to have you explain: broken AI is often appealing to broken institutions. What do you mean by that? Way back around 2018 or so, and this is one of the things that eventually led to us writing this book, I observed that hiring automation companies were advertising their supposed AI systems that could do what are called one-way video interviews. So the pitch was, these AI companies would go to HR departments and say, look, you're getting so many applications, maybe a thousand for each open position. You can't manually review all of them. So just have each of your candidates upload a 30-second video. And our software is going to analyze not even the content of what they say about their qualifications for the job, but body language, facial expressions, and various other things, in order to predict people's personality, job suitability, and that sort of thing. And it was very clear to me as a computer scientist that there was no known way in which this could work. I started calling out this kind of thing, and that resonated with a lot of people. And eventually we realized that there were a lot more of these types of AI products that are not just overhyped but, as far as we can tell, don't seem to really do anything. I called it an elaborate random number generator. But the puzzle to me was why, when thinking about it for a few minutes should make clear that this can't possibly work, it was so appealing to so many of the people who were buying the stuff. And it made me realize it's not because they're getting fooled. It's because even if it is a random number generator, it actually works for their purposes. It allows them to have some seemingly objective way of saying, this is how we filtered down these 1,000 applications to these, I don't know, 20 applications that you can then do a manual interview of. And they never use it for the kinds of jobs that are really valued and paid highly. It's for things like customer service and tech support, where they don't really seem to care that much about finding the best candidate for the job. So our view was that this is one example, and there are many other examples, where there is already something broken about the process. The hiring process, whatever process they have now, is not working; it's not allowing them to identify amazing candidates.
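Narayanan's demand-side argument above can be made concrete with a small sketch, assuming a constant-elasticity demand curve Q = A * p^(-elasticity); the prices and elasticities below are illustrative assumptions, not figures from the episode.

```python
# Sketch of the demand-elasticity framework, under an assumed
# constant-elasticity demand curve: Q = scale * price**(-elasticity).

def legal_work_demanded(price: float, elasticity: float, scale: float = 100.0) -> float:
    """Units of legal work demanded at a given price per unit."""
    return scale * price ** (-elasticity)

price_before, price_after = 1.0, 0.5   # suppose AI halves the cost of a unit of legal work

for elasticity in (0.5, 1.5):          # inelastic vs. elastic demand
    q0 = legal_work_demanded(price_before, elasticity)
    q1 = legal_work_demanded(price_after, elasticity)
    billings_before = price_before * q0   # total spending on legal work
    billings_after = price_after * q1
    print(f"elasticity={elasticity}: work {q0:.0f} -> {q1:.0f}, "
          f"billings {billings_before:.0f} -> {billings_after:.0f}")
```

With elasticity below 1, cheaper legal work shrinks total billings; with elasticity above 1, the elastic case Narayanan describes, halving the price more than doubles the quantity of work demanded and total billings rise, consistent with the number of lawyers going up rather than down.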
So their view is, if pretending that this AI system is unbiased and accurate allows them to cut down on their costs while essentially continuing to do the same thing they were doing before, not caring too much about which candidates they were hiring, then it's a win-win. And unfortunately, this kind of thing seems to be all too common. So it's not merely a matter of saying, this AI system doesn't do what it's supposed to do, but of looking at the underlying system, seeing what's broken about it, and whether that can be fixed. One of the things I've been thinking about, and this might be too basic a summation, is that one potential cost of AI is what it wipes out, mainly in the form of jobs, but another potential cost of AI is what it prevents. And that will be a more invisible cost, in the sense that we won't be able to see what we're losing; it'll just be lost. And so something you've written has really stuck with me: this idea that it may prevent real scientific breakthroughs, because if we start to rely on this sort of endless loop of iteration in science, of just trying to squeeze more out of what we know instead of that real creative leap toward what we don't know that has characterized most of the great scientific discoveries, then AI will actually stymie our progress rather than advance it. Am I summarizing that well? And how do you think about that? That's right. I do think AI has great potential for science if it is used right. Right now, I don't think we're on that path. I think right now, as a scientific community, we are misusing AI more than we are using it responsibly. And that's for a few reasons. One metaphor that I like is adding more lanes to a highway to try to fix a traffic problem, when the real problem is that there is a toll booth that everybody has to pass through. By adding more lanes, you're only incentivizing more traffic, which is going to make the congestion worse. In many ways, I think that's what we're doing as a community with AI for science. So here's what I mean. What are some of the real bottlenecks to scientific progress? It's not producing more papers. We already produce millions of papers per year. That number has been growing dramatically over the decades. It's not leading to true breakthroughs. And there has been a lot of hand-wringing about this already in the scientific community. And one pattern that's been observed, in a paper by Chu and Evans, is that when a field gets larger, when papers are coming in at a bigger rate, it actually gets harder for scientists to wrap their heads around everything that's going on in that scientific community. So they have a tendency to gravitate toward the most prominent, the most already popular, the most central ideas in their field. And this actually slows down the entry of new ideas that might be radical at first, that might challenge the currently prevailing ideas. And we've seen throughout the history of science that the new radical idea, the idea that the Sun is at the center of the solar system, for instance, when it was first proposed, was almost unthinkable. So how do you go from radical fringe idea to the new consensus? That's the real bottleneck. It's not accumulating more facts within existing paradigms. It's realizing that something you thought you knew is actually wrong and moving to the next paradigm that challenges what we think we know. And the way that AI is being deployed is more in the accumulation-of-existing-facts phase.
And what it's doing is further increasing this kind of traffic jam, where there are too many papers coming in, so that scientists are forced to gravitate toward the comfort of clinging to the ideas that are already popular. That is making it harder for this inherently sociological process of determining what is radical and nonetheless worth further experimentation, so that it might one day become the new normal. That's not a job for AI. That is, at least for now, intrinsically a job for humans. And therefore, this avalanche of greater productivity, partly due to AI, is actually jamming up that process. There's more to say on that, but that's the fundamental core concern. But speaking of research, you raise a point which is very dear to me, which is the influence of control over data on the entire ecosystem of research. Today, most of the most valuable data are in the hands of private firms who, number one, don't share them too widely, and number two, pick and choose the people they share them with, to basically shape a bit what kind of research gets published. Your claim that other fields, like medicine, do much better is, I think, a bit too optimistic. But I think that this is a gigantic problem that we're all facing. And we don't have enough antibodies yet. Yeah, I mean, maybe let me speak to my field, computer science, so that I don't say anything about medicine that you might find too optimistic or inaccurate. So early on in the history of the field, the idea that computing technology could have negative impacts was not really something that people paid much attention to in the nascent areas of computing. So there wasn't really any necessity to have a culture of some or many computer scientists being independent of tech company interests and being an external voice of research and accountability. Sorry to violate my promise of not talking about medicine, but I think that is a way in which it's very different from medicine. On the few occasions when I go give a talk at medical conferences, you know, the seriousness with which they treat conflicts of interest, asking me about all of my commercial relationships with companies, they're of course worried about pharma companies. But I wish that in computer science we had anywhere near that level of scrutiny of academic computer scientists' relationships with these technology companies. And we don't. It's just assumed that every computer science professor has ties with big technology companies and/or has a startup on the side. That's quite unfortunate. Not everybody needs to, you know, take some kind of vow of purity and not have anything to do with tech companies. But I do think we need some subset of independent technology experts in academia who don't take money from, and are otherwise intellectually independent of, tech companies. We don't have that today. And all of that contributes to the problem of tech companies owning all the data and making external researchers essentially subservient, because if they want to do good research, they're so dependent on access to not just data but also computational power from technology companies.
I think that's such an important point, and Luigi has gotten me hooked on this idea too, because I also think it's one of the things where the external impression is completely different from the reality. Most people on the outside would think that academics are the unbiased arbiters of truth and would not understand all these subtle ways in which they are not. Just because this is something I've always been interested in, I want to go back to this notion of what element of humanity, what it is in humans, that leads to a true leap of genius. It's kind of like the Kierkegaardian leap of faith, right? If you think about Einstein or Newton, when people have come up with an entirely new way of visualizing the world, it's just been a radical departure. And sometimes the math hasn't even been there yet to justify it. It's just a way of seeing the world that's almost philosophical. So is it that our current reliance on LLMs can never get there? Is it that we don't know whether they can or not, because nobody fully understands why LLMs spit out the answers that they do? And if there is a way to capture human genius, do we need to fundamentally rethink AI? The simple answer is, I don't know, but I can share some thoughts on that. One, I can say that whatever that spark is, I don't think current LLMs have it. I do want to clarify, I'm not saying that it's impossible. I do think it would require different approaches, different in at least two ways. One is that if you want to increase this kind of creativity while also retaining some kind of grounding, without just turning it into a random number generator, that would be a very different kind of product from a chatbot that's useful to everyday users. So you can't expect both of these from the same system, because an LLM, or any kind of AI model or system, that had that level of creativity is just going to have too much randomness to rely upon for everyday use, when someone just wants it to do some shopping or whatever. And I think scientists are notoriously poor at doing everyday mundane tasks, and, I don't know, maybe there is something deeper to it. A lot of creativity is just randomness, frankly, just exploring lots of random paths, and that just makes them very poor assistants for everyday users. So for that reason, that's not necessarily going to come from the companies that are building and monetizing these chatbots on a scale of millions or billions of users. So maybe that has to come from a different kind of research effort. I know that there are some such research efforts out there, but the scale of them just completely pales compared to how much effort is being put into making these chatbots more engaging for everyday users. And that to some extent belies all of this talk about, we're building AGI because we want to cure cancer. If that's what you really wanted to do, you would be putting a lot more effort into making AI actually useful with regard to some of their scientific limitations as opposed to making them useful for everyday users. So you used a very apt adjective, subservient, referring to academics vis-à-vis big tech. I fear, and I think you mentioned this in your book as well, that this is true not only of academics but also of journalists. And this is particularly problematic because we're in a moment in which AI is hyped a lot.
One of the functions of journalists is actually to share the bad news, and Bethany has done her fair share of that. No, I think it's very important, because what keeps the market from going completely crazy is, in part, the fact that you have negative news. If you all of a sudden put a muzzle on the negative news for a while, it will go crazy. And I fear that we are in that phase, that we are in a bubble after all. Every big technological innovation brought so much hype that there was a stock market bubble. It happened with railways, it happened with electricity, it happened with the dot-coms. We are due for one here. So are we in the middle of a bubble? And why? I don't know. I don't know if we're in the middle of a bubble. Here's one possible way in which things might play out. I think it's quite possible that a lot of the hopes and expectations that have led to this level of investment in AI actually do correspond to real potential, but that these are not things that are going to be realizable on a two-to-three-year, or whatever, kind of time frame that's relevant to investors, especially considering that GPUs depreciate on a very rapid time scale. So one possibility is that we are in a bubble, that bubble bursts, but that over a period of the next couple of decades or so, we gradually do manage to productively deploy a lot of the applications that are leading to this moment of hype. And that would in many ways be very similar to the dot-com bubble. A lot of the excitement that led to the dot-com bubble related to things that people hoped we would be able to do online, which turned out to be premature in the 1990s, but which we take for granted today. So I do think that's possible, although I also think there are lots of differences from the dot-com bubble as well. There is one point on which we strongly agree, which is Section 230 of the Communications Decency Act. I've been saying for years that algorithmic curation should not be shielded from liability. Now, I thought that this shift would require a new law. And what I found very intriguing is that you argue that, if properly interpreted, the current version of Section 230 leads to this conclusion. Is that true? Can you explain that? I unfortunately can't talk about this on the record. Since writing the book, I have been retained as an expert witness in a lawsuit against most of the leading social media companies on this point. So I do want to be very careful about what I say on this in public. So I'm glad you're suing the social media companies. We are sympathetic to what you're doing. That's actually very exciting. Last question, if we still have time. The one point where you are clearly optimistic is that you don't think the world will end in 2027. I'm referring to AI 2027, which has a pretty scary view of the world. Why is the scary view wrong? Yeah, I mean, look, we can't be sure that the scary view is wrong. And I'm glad that they're doing the work that they do and are talking about the kinds of scenarios that they think more people should think about. And I do think researchers should be thinking about those kinds of scenarios. We think a lot about those kinds of scenarios. But I think it relies on a whole set of assumptions. It relies on AI developers and researchers being able to get to human-like AGI in a very short span of time. And we've been doing a lot of benchmarking of these AI agents.
And we see very clearly that even though they can superficially solve a lot of tasks that are associated with human intelligence, the way they go about it is just so far from the flexibility, generality, adaptability, and resilience of human intelligence. When you take them slightly out of the kinds of inputs they're used to, the performance of AI systems can catastrophically drop. We're seeing so many limitations. And we don't think this is just a matter of scaling, or even one scientific breakthrough away. This is many scientific breakthroughs away. I do think, just on the tech side, we have a lot of time. That's one point. But that's not even the main point. Our main point is that it doesn't even matter so much if the capabilities of AI systems improve that rapidly. We have a lot of agency as individuals, as companies, as institutions, as policymakers, in how we choose to deploy these things. And so these risks are going to be realized when we deploy AI systems, not just when we develop them. And we have a whole essay called AI as Normal Technology, which is also our newsletter that we're developing into a book, that really talks about how to exercise this agency. So it's not a matter of everything is definitely going to be okay; it's a matter of, here are things we can do collectively in order to ensure that things stay on a good path. So it's a prescription more than a prediction. If you're enjoying this podcast, there's another University of Chicago Podcast Network show to check out. It's called Nine Questions with Eric Oliver. Have you ever wondered who you are, but you don't know who to ask? Then join Professor Eric Oliver as he poses the nine most essential questions for knowing yourself to some of humanity's wisest and most interesting people. Listen to Nine Questions, part of the University of Chicago Podcast Network. A lot of his arguments rest on the idea that there are systems and structures in place in our world that will stop us from having to confront these questions. And I think they rest on a second assumption: that those systems that will make us confront those questions slowly are also good, that that friction, for lack of a better word, is good. Does that sound like an accurate summary? Yeah, absolutely. I think that he assumes that this is in place, when part of the crucial debate is, from a societal point of view, what should we do? If you just assume that technology will be slow by definition, there's nothing to be done. It's kind of a positive, optimistic laissez-faire attitude, assuming that the government works very well. And maybe because I'm pessimistic, I don't think that the government works very well, especially when there are powerful forces that want the government not to move in that direction. It seems to completely ignore that there is massive pressure from Silicon Valley to get rid of all these frictions. It's funny, I've also been subscribing to his AI newsletter, and I've been thinking increasingly that AI is getting integrated into things in ways that are not as obvious as he thinks they're going to be. And so, because it's not as obvious, it's sort of creeping in around the back door. And therefore, because it's creeping in through the back door, the systems won't be in place to prevent it. So I really want to believe in the simplicity of his viewpoint, or the seeming simplicity of it, that deployment will be different than development.
But I didn't come away convinced from the conversation that that was right. And if he says that, at the end of the day, AI is such a quote-unquote normal technology, then the acceleration that we fear, or we desire, might not be there. And so even the pressure to think hard about this societal tradeoff disappears. Yeah. And that's dangerous in and of itself, right? Normalizing AI might actually be worse than not normalizing it, because it's so comforting to all of us to believe that it will be like electricity when it may not be, right? It's false. It's false. I think what you're getting at is that it's false comfort. Yeah, absolutely. But first of all, I want to be very clear: I did learn a lot from his writing, and it did change my mind a bit. I think that his notion that AI might be, quote-unquote, a normal technology, not so accelerating, might be true. GPT-5 was not such a leap forward from GPT-4 as people expected. And so maybe this technology, at least the current LLM paradigm, is reaching the point of decreasing returns to scale. And so yes, it's going to continue, but it's not going to continue at an accelerating pace as we've seen in the last few years. Well, I want to believe that, I think, because I'm pro-human and because I do worry deeply about the impact of AI at speed, at accelerationist speed, on our economy, because I don't think there's any way, given the current state of our society, that we'll have anything in place that will deal with it. And if it is what its proponents say, it will be like the China shock times, I don't know, 10,000, times a million. So my bias is to believe that, but I'm not really sure it's right. Did you see there was a new gauge published by OpenAI last week called GDPVAL? It's basically evaluating leading AI models on real-world tasks that have been curated by experts from across 44 different professions. And my longtime friend and former colleague Jeremy Kahn, who writes for Fortune and is a great thinker about AI, wrote a piece about it that was eye-opening to me, because his point is basically that some of these studies showing that AI workslop is a drag on productivity, and that there aren't really any advances being made, are often not quite what they're cracked up to be. And so I'm really not sure what to think. Even as we're all taking comfort, for example, maybe in his point of view that LLMs can only do what a lawyer might be asked to do on the bar exam, but can't do what a lawyer does in real life, there's work being done that would suggest that it's already changing, even as we're looking at it. Within 15 months, the models went from being barely credible to producing work that's rated as comparable to lawyers' work almost half the time. But this piece I'm reading now would actually circle back and end up supporting our guest's point of view, in that GDPVAL is still focusing only on written deliverables. And lawyering is so much more than that: negotiations, courtroom advocacy, client counseling, ethical decision making, they're all outside of this frame. So I think... Did you say ethical decision making? Yes, ethical decision making, I did say that. So I think it comes back to what you said: every single thing has to be very grounded in the specifics, and it's really hard to generalize. For maybe any task, it's just really hard to generalize. How's that?
Yeah, that sounds a bit like historians; they always say, it depends. And with that, you don't have any conclusion. But I think that we can generalize one thing: it is going to be disruptive for any kind of job. What is difficult to establish is for which jobs the disruption will be mostly positive, because you get an increase in productivity, and for which there will be a lot of displacement. And so this is not a reason to block the benefits of AI, but it is a reason to think very hard about how we can soften the societal consequences that these technological changes will bring about. And in particular, I very much like his position vis-à-vis the famous Section 230 of the Communications Decency Act. He shares my view that you should make companies liable when they actively edit, in the form of promoting some posts over others. Now, he claims that you can even use the existing law to make them liable. Unfortunately, he didn't want to talk about it, because he's an expert witness in a case like this. But I think that would be very, very useful. He also has a very thoughtful position, which we didn't have time to discuss, on copyright issues. He basically says we need the political system to intervene. The trade-off between the incentives to protect the producers of art and the incentives to distribute their products to the largest number of people is very much driven by the existing technology, and the existing technology has changed. And so we need to have this discussion politically. Unfortunately, neither of the two sides is ready to have this discussion in a calm way. And so I fear that we're going to try to limp along with outdated institutions. That, to me, is the greater frustration. But I also thought, separately, that you raised an interesting question that there's a lot of debate about today, which is: is AI in a bubble? And I do think that is a separate question from what the transformative effect of AI will be, because I think it's possible that both things are true: that AI is in a huge bubble, and that the collapse of that could be sort of a first wave of damage before we even get to the damage wrought by AI taking people's jobs. In other words, it's possible that there will be all sorts of levels of damage from a stock market wipeout that turns into an economic wipeout, maybe before we even get to the real problems caused by our jobs going away. And maybe that's actually the most pessimistic take possible: yes, AI is in a bubble; yes, the wipeout is going to be terrible; and yes, AI is also fundamentally, incredibly transformative, and it's going to take all our jobs. I think it would be very painful. And I think all the problems that are being created by Trump have been pushed under the rug because we had this big bubble driving the economy. And so we're going to have the bursting of the bubble, with the normal consequences, and the emergence of all these underlying problems. So this double coincidence will be very painful. And the possibility is that this will start a lot of restructuring that eventually will be positive. But in the short term, I see a lot of pain. Yeah, I think it goes back to the conversation we had about Japan, actually, which is that in many cases, bubbles are masking underlying weakness. And I worry that the current AI bubble is masking underlying weakness in our economy.
And so when it bursts, that underlying weakness will also be part of the problem. There are these incredible numbers from Michael Cembalest at JPMorgan. He wrote that since the release of ChatGPT in 2022, AI-related stocks have accounted for 75% of S&P 500 returns, 79% of earnings growth, and 90% of capital spending growth. So if all of that collapses and goes away, it's going to be extraordinarily painful, on a scale that puts the dot-com bubble to shame. A friend of mine actually reminded me recently: remember pets.com, how that was the celebrated collapse of the dot-com era? So guess what pets.com's peak market cap was? About $300 million. Tiny, tiny, tiny. Contrast that with NVIDIA's market cap right now: as we're recording this, NVIDIA's market cap is $4.56 trillion. So the scale is just entirely different, and therefore the damage will be a lot bigger. And I also think our economy is in worse shape. The AI bubble is hiding a plethora of sins, including a rising and unsustainable debt level. And so I think that it could be quite ugly. How's that for Monday morning negativity? Yeah, I think it's very negative. But, as always with being negative, sometimes you might be right. And what is interesting is that the biggest change in technology was the introduction of mass production, which Ford brought about and which really diffused through America during the period of the Great Depression. One of the things about the Great Depression that most people are not aware of is that there were massive increases in productivity during it. They didn't manifest in more people having more wealth, but from a technical point of view it was an extraordinary period of growth. So it is possible, as you said, that after the crash following the AI bust, there will be a lot of restructuring, tremendously increasing efficiency, but with a lot of people unemployed. I think those two things are both possible. And to be fair, I think moments of crisis are the easiest ones in which to restructure. I always tell my students, as an example of technology: at the beginning of World War II, the French and the British were defeated by the Germans because the Germans knew how to use tanks in a much more efficient way in waging war than the British and the French. But there's nothing that makes you learn as fast as a defeat, a military defeat. And the British, and later the Americans, learned how to use tanks pretty fast. But without a defeat, it's hard to change. So, unfortunately, economic downturns have this positive effect, but they're very painful. Capitalisn't is a podcast from the University of Chicago Podcast Network and the Stigler Center, in collaboration with the Chicago Booth Review. The show is produced by me, Matt Hodapp, and Leah C. Zerain, with production assistance from Utsav Gandhi, Matt Lucky, Sebastian Burka, Andy Shea, and Brooke Fox. Don't forget to subscribe and leave a review wherever you get your podcasts. And if you'd like to take our conversation further, also check out promarket.org, a publication of the Stigler Center, and subscribe to our newsletter. Sign up at chicagobooth.edu/stigler to discover exciting new content, events, and