Hi, listeners. We're running a short survey to learn more about our audience so that we can continue to bring you a podcast you find helpful. If you have a moment, please take the survey at mitsmr.com slash podcast survey. You'll receive a complimentary download of MIT SMR's executive guide, How to Manage the Value of Generative AI. Please take the survey this month at mitsmr.com slash podcast survey. We'll put that link in the show notes, and thank you for your help.

We're back on March 10th with more new episodes. For now, we hope you enjoy this conversation.

I am Daron Acemoglu, Institute Professor at MIT, and you are listening to Me, Myself, and AI.

Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review, exploring the future of artificial intelligence. I'm Sam Ransbotham, Professor of Analytics at Boston College. I've been researching data, analytics, and AI at MIT SMR since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. On each episode, corporate leaders, cutting-edge researchers, and AI policymakers join us to break down what separates AI hype from AI success.

Hi, listeners. Thanks again to everyone for joining us. I'm excited to be talking with Daron Acemoglu, Professor of Economics at MIT. Daron works extensively on economic development, labor economics, and the economics of technology. In 2024, he was awarded the Nobel Prize in Economics for this work. His insights on the interplay between institutions, technological change, and inequality are particularly relevant for today's businesses. Of course, our listeners will be most interested in Daron's thoughts on AI. Daron, great to have you on the podcast.

My pleasure. Thanks, Sam.

Okay, so your work spans institutions, technology, and inequality. Can you share some of the themes in general from your past research?
I got into economics because I was fascinated by what I saw around me in my very young teen years: very divergent economic, political, and social outcomes across countries, and huge disparities in terms of wealth and in terms of poverty. And those interests have framed my research and my focus on institutional factors, which determine the effects of history, the effects of how society is organized, the rules, the laws, the norms, and technology as the prime channel via which human ingenuity and human decisions impact economic productivity and economic well-being. And throughout, I have been fascinated by the interplay between institutions and technology and by how institutional factors and technological factors have evolved over time. So a lot of my research has focused on, for example, why there has been a huge divergence in the economic fortunes of different parts of the world since the 16th century or thereabouts. It is very much related to, for example, the fact that European powers colonized the rest of the world and shaped the institutional trajectories of very different nations around the world in very diverse ways. And I've also been fascinated by the Industrial Revolution and how we started this process of using knowledge, science, and various skills to improve the way that we produce goods and services.

That's all really salient for what's going on right now. You have a recent book, Power and Progress. And I think I was reading the preface of a revised edition, perhaps, where you noted that things sort of changed on you underfoot. How have those recent changes shifted some of your thinking?

Well, I think two things are worth noting there. The main thesis of Power and Progress is that technology does, to some extent, what we want it to do. It does not have a preordained destiny that will take us in one direction or another. We have a lot of agency, a lot of choice in shaping the future of technology.
And different futures correspond to different winners and losers, different benefits, different costs, different productivities. We try to make that point by going into history, showing how critical periods during our recent history, like the last 1,000 years, have led to sometimes big technological breakthroughs, but with huge losers. And sometimes those forces have been reversed, and gains from technological betterment have been shared more equitably. So that message, I think, is more relevant today than ever. AI is a particularly versatile technology. It provides so many different futures for us. And the narrative that there is a determined, natural future of AI, and we are all going there whether we want it or not, and ultimately we're all going to become incredibly more prosperous out of that, is just simplistic. And fighting against that narrative, I think, is very important today, because that narrative lulls us into a sense of helplessness and a sense of complacency that could be quite costly. On the other hand, of course, in 2021 and 2022, when we were writing, it was impossible to foresee how rapid some of the advances in generative AI would be. But those advances haven't really changed the basic trade-offs and the basic messages that we wanted to convey in the book.

You talked at a high level about different directions of AI. What are they?

I think, simplifying it, you have a couple of poles that are pulling in different directions. I would single out, in the production process, automation, which is the dream of most AI models today, especially under the banner of artificial general intelligence, AGI, which aims for large language models or other generative AI tools to reach levels of capability comparable to the best workers across a very wide range of domains.
The reason that is viewed as attractive is that, just like previous rounds of software, it improves cognition in different domains and can then be used for automating tasks. So AGI is very tightly interwoven with the automation agenda. Automation is great: it gets rid of some routine tasks, it gets rid of some boring tasks, and when it's applied in the physical domain, such as with cranes or robots, it can remove the most dangerous tasks from the human work schedule. But automation also doesn't benefit workers by itself. It takes away tasks from workers. It is beneficial to capital and capital owners, and not so much for workers in general. So at the other pole, we have things that are complementary to humans, meaning that technology enables humans to do more things or better things or completely new things. These new things are what I refer to as new tasks. So if you look at people around you, many of the occupations you'll see involve things that could not even be imagined 50 or 60 years ago. As a journalist, you're going to be making videocasts and podcasts and using technologies for research that require completely different skills than somebody 60 years ago going to the library and sifting through books. So those are some aspects of new tasks. So are many of the physical occupations in manufacturing that involve much more technical work. Those have generally been very good for productivity and for worker wages and employment. So that's one dimension in which the future of technology could have very different effects, depending on whether we go in the automation or the new-task direction. I would also like to add that whether we use technology for information centralization or decentralization is also important, in that many of the early hopes about computers were centered on decentralization. People could do in their garages things that IBM as a centralized organization couldn't do.
And personal computers enabled that to some extent, though not anywhere comparable to the hopes of the pioneers of computing in the '60s and '70s. But today we are going in the opposite direction. Large language models are information centralization tools. They aim to collect all of the information of humanity, ultimately, and then centralize it and process it in a centralized manner that then gives you answers. And then there's less for the decentralized human mind and human participation to do. Centralization and automation are two different poles, but they are complementary. So when I'm talking about new tasks, it is really about enabling the technology to go in a direction that can really help workers, help individuals, not just big corporations. So it's going back to those aspirations that were already present in the late 1960s and 1970s. And my work shows how new tasks, when they have been activated, have led to productivity gains and have led to wage gains and employment gains.

So if we think about these new tasks, though, what kinds of things should businesses be looking for? If people buy this argument and they want to go down this path, what do they need to do?

Actually, my answer is disarmingly simple. AI is really an information technology, a very powerful information technology. It's not an automation technology. AI is not thinking anywhere like the human brain. Instead, it has some truly impressive capabilities that the human brain doesn't have, and it lacks some of the judgment- and creativity-related capabilities that the human brain naturally has. As an information technology, what AI is very good at is sifting through gargantuan data sets and finding relevant context and information for some specific task or specific context or specific application.
So if you're an electrician and you encounter a piece of equipment that is behaving in a way you haven't seen before, or a completely new piece of equipment that you don't have experience with, and if you have the right AI tool, it can immediately and reliably give you information about why that unexpected behavior is occurring, or what you need to know about this equipment and how it interacts with the particular type of electricity grid or the environment it is situated in. Those are the kinds of things that regular electricians would have to work for decades to acquire, and even then imperfectly. So we can significantly improve what electricians, what nurses, what educators, what journalists, what academics could do, using AI to perform more sophisticated tasks or new tasks and acquire much better information. And I think while generative AI, together with the right sort of scaffolding from good old-fashioned AI that does pattern recognition, could provide that kind of ideal tool for human new tasks, that's not the direction in which AI is being developed. In fact, none of the big companies are pouring even a small fraction of their investment into developing AI as a pro-human, pro-worker tool.

Well, let's connect these last two points a little bit. These are, as you say, being developed by big companies. When I think about the electrician in your scenario, wouldn't they naturally get recommended solutions that come from, let's say, advertising models that are built into the large language model?

Right. Right now, today, as an electrician, you can take ChatGPT with you, and you can ask questions. But there are several problems with that. First of all, it has not been designed or optimized for that task. Second, it's not reliable. So a much higher degree of reliability is necessary.
Third, it has not been trained on the domain-specific information, all of the relevant electrical equipment and deep understanding of the electrical laws and electronics, that would be necessary. And most importantly, it has also not been trained on use cases of the best electricians dealing with similar problems, from which AI could learn. So it is not designed for that task, and it hasn't been trained with high-quality, domain-specific data. All of those restrict your ability to use ChatGPT or similar tools. And that is the reason why, whenever employers are given a push toward using them, the first thing they want to do is just use them for automation, because that seems to be the path of least resistance.

To think about that a little bit more, there's nothing that says we couldn't train those models on those domain-specific knowledge bases. Maybe it's just early days and that could come.

And I think that's plausible, but I'm not sure of the economic incentives for people to do that. The economic incentives are not there because this is not the business model of the leading corporations. That data doesn't exist, and it won't exist unless we have property rights in data and we have proper data markets. The current architecture of large language models may also create hard limits on reliability, whereas in situations like this, reliability could be a very important constraint. So, for example, imagine we do this with nurses, and one in a thousand times they give you the complete opposite of what they should do, and you poison the patient. One in a thousand seems very small, but actually, in medical applications, that would be an unacceptably large casualty rate. So it's really a different architecture and a different sort of preparation and training of these models that may be necessary.

Yeah, I think your error rate's an interesting one, because I'm not really sure what I think about that. Half of nursing students graduated in the bottom half of their class.
That's just how averages work. But as a result, we don't allow nurses to make those decisions at the moment. So except in a few cases where you have highly trained nurse practitioners, nurses cannot prescribe drugs. They cannot make emergency decisions. When a patient is having problems, they have to wait for a physician to come. So that's the margin that we're talking about, and nurse-complementary technology would expand what nurses do in those domains. And no, you couldn't do that unless all of the nurses became even better-trained, licensed nurse practitioners or the AI models got much better.

Let's push on the nursing example a little bit more. My daughter has recently learned how to drive. I mean, I'll make you nervous: I think we both live in the same area. No, she's a good driver, though. But she hasn't seen millions of near-wrecks yet. And I would love for her to have that experience. By analogy, the nurses may not have seen these esoteric cases, in the way we were just talking about. These AI models are fabulous at storing lots and lots of information and recalling it.

I think there are many things that can be done. The future of technology is rich. If you integrate AI with virtual reality, you can have personalized experiences where, you know, your daughter could experience very dangerous situations sitting in front of a computer. And I can tell you from my own experience, when you get behind a wheel, you think you know, and you don't.

We talked a little bit about incentives. Let's talk about measurement a little bit. The issue we've always had is that we can measure the number of widgets, but we have a lot of trouble measuring the outputs of our knowledge economy. How important is measurement? And is there anything we can do to try to improve it?

Well, I think measurement is very important. And there are some puzzles that we should bear in mind.
And I think these puzzles do feed into my concerns and also my skepticism about some of the claims. We definitely do live in an age of innovation according to many measures. If you look at the number of patents at the USPTO, they have quadrupled over the last 40 years. We get an incredible array of new apps every day on our phones. We have much faster turnover of electronics, in quite a significant way. I mean, when I use my iPhone that's a couple of years old, everybody says, wow, you're really missing out. And when people were using rotary phones, dial phones, you could use the same model for 30 years and nobody would bat an eye. So there is a sense in which we are getting a lot of innovations. But using the standard measures of economists, we don't see much improvement in productivity. In fact, we're having slower productivity improvements today than we did in the '50s, '60s, and '70s, those boring pre-digital days. What's up with that? Well, the people from Silicon Valley and economists who are sympathetic to that perspective would say that's all a measurement problem. You're just not making allowance for how high-quality some of the products you're getting now are. And the Bureau of Labor Statistics is overestimating inflation. You have in the middle of your palm a super powerful machine that allows you to access information like never before. So all of these things, they think, are the reasons why you shouldn't look at macroeconomic data; you should ignore all of the economists' core data sources. There is some truth to that, but I think it can be exaggerated. We did not measure the benefits from antibiotics that well either. But you still got amazing improvements in many directions: in terms of GDP, in terms of output of the pharmaceutical sector, and in lives saved. Life expectancy increased tremendously with antibiotics. Well, life expectancy is not increasing now. We're not seeing any of the AI-facilitated pharmaceuticals do anything yet.
So perhaps time will change that. But we just don't have objective measures that show huge gains from AI as of now. I don't think that's just a measurement problem, but measurement can help us understand where the bottlenecks are and also perhaps improve certain assessments of what the impact of AI is in different sectors. But I think a lot of it, again, comes down to what I was talking about. So if you overdo automation, if you overdo information centralization, you're not actually going to get all that promised productivity boom.

So, you know, I'm bought in. What do we need to do here? I mean, as an individual, what does an individual need to do? Given that I can shake my tiny fist against the FAANG companies, what should individuals be doing here?

I think a lot. At the end of the day, society consists of individuals. If a lot of individuals change their minds, that has an effect. Part of the reason why tech companies have so much power is that they have what Simon Johnson and I called, in Power and Progress, persuasion power. They have persuaded the rest of society that their intentions are benign, their technology is good, and they will not misuse it too much. There's a lot of counterevidence to that, but we still sort of believe it. We still believe the leading AI companies when they say, we have this amazing godlike technology; believe that it's godlike, and that it will be used just in your service, your own personal god. So, you know, absolute power corrupts absolutely. I don't know that we should really believe those claims. So different individuals will have to reach their own conclusions, but enough individuals, a critical mass of them, changing their views would have an effect through the democratic process. And you know who are individuals who have a lot of say? The hundreds of thousands of people, perhaps more, who work as engineers and scientists in these corporations. They determine the direction of research.
If they decided next year that they want to work not on automation and AGI but on developing more pro-worker, pro-human technologies that will help workers and human decision makers, and on decentralization, that's what we would get. So that's an individual decision. Another individual decision is entrepreneurs'. You know, a lot of new ideas come from startups. Right now, startups are aligned with the big companies because their dream is to be bought up by the big companies. That's the way you become a billionaire right now. Well, again, that's a choice. Different values, different priorities, different regulatory systems, perhaps we should really be much more vigilant about mergers and acquisitions, then that could lead to very different dynamics.

Good. I hope our students are listening, because I do think that most of our students are out there trying to come up with startups with the goal of being acquired by one of these large companies.

If I wanted to be rich, that's what I would do too.

Yeah. So maybe our measurement problem extends both to our productivity and to our incentives, if that's how we're measuring success. You mentioned regulation. Let's touch on that a minute. I tend to think market forces can perhaps do a better job of aligning incentives than regulation. What can regulation do here, particularly when we are dealing with goods that are not physical goods?

Well, I would like to say three things about regulation. First of all, regulation is always tricky. Look at Europe: Europe is so far behind in AI and many areas of tech because it has not been very conducive to innovation via its regulatory system. Too many organizations, too much interference. That can be very bad, so you have to balance things. Second, some regulation on health-critical, information-critical, democracy-critical things is absolutely necessary.
You cannot let AI models pretend to be doctors without having some sort of assessment that they are actually giving adequate information. We apply tremendous barriers to anybody becoming a quack doctor. We should apply similar standards to AI models. But most importantly, we may need a change in the philosophy of regulation. Regulation should not be a reactive thing where, oh, we try to stop whatever AI companies are trying to do. I think we need proactive regulation that helps the AI industry move in a more socially beneficial direction. And that starts by recognizing what the socially beneficial direction is, and I've argued it's pro-worker, new tasks, more decentralization. It then recognizes why the current playing field is tilted against that direction, and tries, in a soft way, without stopping or killing the market process, to correct those distortions and provide a fighting chance to the alternative directions.

That's a moment of hope there. That's good. Let me switch a little bit. Our show is Me, Myself, and AI. Let's let people get to know you a little bit. How did you get interested in these things?

I've always been interested in technology as the engine of the Industrial Revolution and of the rapid growth process. And that brought me, together with my studies of labor markets, to focus on automation. So I've been working on automation for over 20 years. And then, when AI models started making rapid advances in the mid-2010s, I got worried about what that would imply for the future of work, what it would imply for wages and employment. And that made me invest more time and resources into AI and understanding AI, understanding its societal implications, but also understanding the technology. And I think it's fascinating. It's super promising, but also super scary.

I think that's a nice way to wrap up that balance that you keep going back to. One of the things we like to do on the show is ask you a bunch of rapid-fire questions.
You know, top of your mind. What did you want to be when you grew up? When you were a kid, what did you want to be when you first were thinking about a career?

I wanted to become a social scientist.

Ah, so, well, okay, that worked out for you then. What's the biggest misconception that people have about artificial intelligence?

That it will somehow completely replace humans. I think, at the end, AI will be something that works alongside humans. And the better we understand that and how to achieve it, the better we will be at shaping the future of work and the future of humanity.

How do you personally use these tools?

I use them just like other people. I sometimes ask ChatGPT questions. And, you know, most of the time I am both surprised by how good it is and disappointed that, if I really trusted everything I got from it, I wouldn't be doing so well.

Yeah, I have to push back a little bit there. I find that if I know something about a subject and I ask a question, I'm disappointed in the results. And if I don't know much about the subject, then I'm impressed with the results.

That's it. That ought to worry me, because even when I know about the subject, I am impressed by how well it is able to synthesize the basic knowledge there. But it always pretends to know more and gives answers that are really incorrect, because it's extrapolating too much.

What has moved faster than you expected with artificial intelligence?

Oh, the large language models. I mean, their capabilities, their reasoning capabilities, are truly impressive.

All right. It's been great talking to you. This has been a fascinating conversation. I love your balance of both optimism and concern, and I think that's a nice way to wrap up this session. Thanks for taking the time to talk with us.

Thank you, Sam. This was a lot of fun.

Thanks for listening. Me, Myself, and AI season 13 premieres on March 10th. Please join us.
Our show is able to continue in large part due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.