Me, Myself, and AI

Science, Innovation, and Economic Growth: OpenAI’s Ronnie Chatterji

30 min
Dec 8, 2025
Summary

Ronnie Chatterji, Chief Economist at OpenAI, discusses how AI tools like ChatGPT can accelerate scientific innovation and economic growth by helping researchers explore interdisciplinary combinations and run experiments more efficiently. The conversation covers AI's current consumer use cases, the economics of generative AI adoption, and the importance of maintaining critical thinking while leveraging AI as a complementary tool.

Insights
  • AI adoption has been unprecedented—ChatGPT reached 100M users in 2 months, faster than any consumer product—but the real economic value will come from scientific innovation and discovery, not just productivity gains
  • AI acts as a complement to deep expertise rather than a replacement; senior professionals with domain knowledge can leverage AI more effectively than early-career workers, creating potential job market inequities
  • Current consumer usage of ChatGPT centers on three main use cases: information seeking, practical decision guidance, and writing assistance (editing/summarizing), not wholesale outsourcing of thinking
  • The biggest risk isn't malicious use but rather outsourcing critical thinking; maintaining intellectual rigor while using AI tools is essential for individuals and organizations
  • Scaling laws and model capabilities are advancing faster than anticipated, but extrapolating these curves is dangerous; real economic impact depends on vertical-specific adaptation and organizational adoption
Trends
  • AI as decision assistant gaining traction in enterprise; organizations need specialized, vertically-adapted models rather than general-purpose solutions
  • Growing focus on AI safety and risk mitigation across multiple dimensions (national security, mental health, democratic participation) within AI labs
  • Shift from productivity-focused AI narratives to innovation-focused narratives; emphasis on AI's role in accelerating R&D and scientific discovery
  • Widening expertise gap: AI tools amplify advantages for experienced professionals while potentially disadvantaging early-career workers lacking domain knowledge
  • Consumer AI adoption plateauing in novelty phase; next growth phase depends on monetization models (subscriptions, enterprise licensing, commerce integration)
  • Interdisciplinary collaboration emerging as key innovation driver; AI tools positioned to bridge knowledge silos and reveal novel field combinations
  • Regulatory and institutional alignment becoming critical bottleneck; technical capability alone insufficient without sector-specific compliance and adaptation
  • Real-world usage data diverging from hype narratives; research-backed analysis of actual user behavior contradicting binary 'AI good/bad' discourse
Topics
  • AI-Driven Scientific Innovation and R&D Acceleration
  • Generative AI Economics and Business Models
  • ChatGPT Consumer Adoption and Use Cases
  • AI Safety and Risk Mitigation
  • Interdisciplinary Collaboration and Knowledge Integration
  • AI as Complementary Tool vs. Replacement Technology
  • Scaling Laws and Model Capability Extrapolation
  • Vertical-Specific AI Adaptation and Enterprise Implementation
  • Critical Thinking and Intellectual Rigor in AI-Assisted Work
  • Job Market Impact and Workforce Skill Requirements
  • AI Transparency and Responsible Disclosure
  • Tacit Knowledge and AI-Mediated Collaboration
  • Consumer Surplus from AI Tools
  • Generalist vs. Specialist Career Paths in AI Era
  • Government Investment in AI Infrastructure and Semiconductors
Companies
OpenAI
Primary subject; Ronnie Chatterji is Chief Economist; company behind ChatGPT with 800M weekly active users
MIT Sloan Management Review
Podcast host organization; Sam Ransbotham is professor of analytics and has researched AI/analytics since 2014
Boston College
Sam Ransbotham's current academic affiliation as professor of analytics
Duke University
Ronnie Chatterji is faculty member at Fuqua School of Business; researched technology diffusion and innovation
Harvard Business School
Mentioned as institution researching economics of science alongside MIT Sloan and Duke
Moderna
Referenced as guest on previous podcast episodes discussing AI applications in pharmaceutical R&D
Pirelli
Referenced as guest on previous podcast episodes discussing AI applications in tire manufacturing
Wendy's
Upcoming guest episode; using AI to enhance drive-through experience
People
Ronnie Chatterji
Chief Economist at OpenAI; expert on innovation economics, technology diffusion, and AI policy
Sam Ransbotham
Podcast host; professor of analytics at Boston College; AI/data research lead at MIT SMR since 2014
Everett Rogers
Diffusion of innovation theory scholar; framework referenced for understanding ChatGPT adoption patterns
Erik Brynjolfsson
Economist whose work on AI capability improvement (4% to 72%) cited regarding extrapolation challenges
Quotes
"Science is sort of the ingredient to innovation and innovation drives economic growth."
Ronnie Chatterji~18:00
"AI can help us both brainstorm which combinations might be most useful and help us run through some of those combinations and figure out which ones are most fruitful."
Ronnie Chatterji~25:00
"The worst possible use that I can think of is outsourcing your critical thinking."
Ronnie Chatterji~55:00
"AI is a real complement for people with deep expertise. The question we have to answer is how do folks who are just starting off in their careers get that expertise if the job market starts to change?"
Ronnie Chatterji~48:00
"The real news gets made when we come up with new insights from analyzing the real usage data and the way people are using AI."
Ronnie Chatterji~62:00
Full Transcript
Hi, listeners. We're running a short survey to learn more about our audience so that we can continue to bring you a podcast you find helpful. If you have a moment, please take the survey at mitsmr.com slash podcast survey. You'll receive a complimentary download of MIT SMR's executive guide, How to Manage the Value of Generative AI. Please take the survey this month at mitsmr.com slash podcast survey. We'll put that link in the show notes, and thank you for your help. How will AI enable the future of interdisciplinary collaboration? Find out on today's episode. I'm Ronnie Chatterji, Chief Economist of OpenAI, and you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review, exploring the future of artificial intelligence. I'm Sam Ransbotham, professor of analytics at Boston College. I've been researching data, analytics, and AI at MIT SMR since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. On each episode, corporate leaders, cutting-edge researchers, and AI policymakers join us to break down what separates AI hype from AI success. Hey, listeners. Thanks for joining us. I think everyone listening by now knows that I tend to think all of our episodes are exciting, but today I especially do. Our guest today is Ronnie Chatterji, Chief Economist at OpenAI. Ronnie, great to have you on the podcast. Sam, thanks for having me. And I think this will be an exciting one. I don't know if I can beat the standard of all your other episodes. We're going to do our best. Okay, a lot of pressure. So I read a stat that OpenAI has about 800 million weekly active users as of October. I think August it was 700 million, March 500 million, 600 billion tokens a minute. You know, you throw 100 million here, 100 million there, it starts to add up after a while. I suspect then that most of our listeners are pretty familiar with OpenAI. 
But can you give us a quick introduction to the company and what it does? I can tell you the first time I ever heard of OpenAI was in late 2022, when I was working in government and a bunch of folks were reciting some poems that sounded almost too catchy, too perfect to be written by a human. But there they were reading them, and they were really funny. You know, sometimes poetry aligned around a particular individual or funny habits someone had. And when I came back, they said, hey, it's this thing called ChatGPT. And so most people will know OpenAI through the ChatGPT product. But of course, you know that ChatGPT was sort of an accidental consumer product, because the team at OpenAI had been working on an API for developers to build on top of AI models before that. But when it came time to release this consumer product, they weren't really sure how it would be received. And the quick adoption, almost 100 million users in just two months, really was unprecedented, larger than any other consumer product that I'm familiar with, and really set the stage for the rapid pace of generative AI adoption we've seen over the last few years. Yeah, that's been huge, the ease of adoption. Ronnie is also a professor at Duke University. He knows diffusion of innovation theory. Putting that chat interface out there really clicks on all the relative advantage, complexity, trialability, observability, compatibility, all the things that we know from Everett Rogers. But practically, you know, before, like you say, that fall of 2022, when I talked to people, they knew about AI, but they hadn't used it. Now, practically everybody has used it. You've got a lot of people using it. And I think that free access got a ton of users, but it requires massive amounts of money to keep these things going. How are the economics going to start playing out with OpenAI? For context, you've just had the buy now button integrated and are starting to integrate apps. How are the future economics going to play out here? 
Well, because the adoption was so unprecedented, I think now we're trying to figure out where AI is going to actually deliver value, both in the enterprise, which we can get to later, but also on the consumer side. And there have been so many other products released. Think about Sora, OpenAI's recent video product. And so when you think about the economics of AI, we started at the very beginning when I got the job. I mean, they said, look, we need an economist to make sense of the economics of generative artificial intelligence. And I was a good choice, I think, because I'd worked a lot on these issues, as you mentioned, in my academic role at Duke University. I learned a lot from scholars around the world about how technology diffuses across organizations. And so that was a big part of my research and my dissertation. At the same time, I'd worked in government and thought a lot about technology policy and how government investments in critical infrastructure like semiconductors could be really important for national competitiveness. And finally, I was just really interested in business. And this gets to this point you're making here about the commercializability of AI. I've taught in a business school for my entire career. I'm really interested in how my MBA students are going out in the world and going to make a career for themselves. And so many of them are going to be working in this AI space in some form or fashion. So the question is, what's the business model going to be? And I think you're starting to see some really interesting things emerge. OpenAI's products like ChatGPT have big subscription bases. So some people are already paying to use ChatGPT or the API. And that's obviously an important source of revenue. Enterprises are increasingly signing up. A large percentage of enterprises around the world, and this isn't just in the US, this is globally, are signing up for some sort of artificial intelligence product. And a lot of it is ChatGPT Enterprise. 
And so that's another model by which we'll generate revenue. But going forward, I mean, the sky is really the limit in terms of thinking about the intersection between business and AI. The thing you mentioned around helping consumers shop is really interesting to me. I bought a pair of jeans, actually, that just arrived yesterday, using ChatGPT. And why did it work for me? Because, you know, I don't like to go shopping. I asked Chat. I said, look, these are the jeans I've liked in the past. Here are the colors I'm missing. And, you know, I'm at a certain age where I want to make sure I look cool, but not too cool. Right. If you know what I mean, Sam. Right. This is really, really important for someone like me working inside a tech organization. Like, I can't be like I'm trying too hard. And ChatGPT got all that, actually, and recommended two pairs of jeans that I purchased. And when they arrived yesterday, I was pretty happy. And I can see that facilitating shopping, for someone like me who really doesn't like to go shopping, being really a tremendous value for a lot of consumers and creating a lot of cool business models around it. And that's just one example. So I don't want to denigrate productivity. I think we're going to have a lot of productivity. But what about something bigger? What about science? I think you talked about science with an analogy, and you can probably correct me here. The analogy was walking down a corridor with a whole bunch of closed doors. You don't know what's behind the doors. Maybe these tools can help us peek behind the doors. What's the option value there? Yeah, when I was in graduate school, there was a whole bunch of folks studying the economics of science and technology. And for a little bit at the beginning of my career, I was in a business school and I was talking to people at MIT Sloan and Harvard Business School and Duke's Fuqua School of Business, where I'm a faculty member now. And they were all interested in the economics of science. 
And I said, well, hey, you know, why is science so important? Why should economists care so much about science? And in the courses that I took and the professors that I worked with, I began to understand that, you know, science is sort of the ingredient to innovation and innovation drives economic growth. And so I started to think much more deeply about the research and development process, the role of companies and universities and governments in funding scientific research. And when you think about a new tool like AI and its potential to accelerate scientific research and innovation, then you're talking about a tremendous game changer for the economy. And yeah, when I think about science, I think about a scientist deciding where to spend her career. I'm a social scientist, but some of us have to make similar decisions. What do you want to work on? For me, it was entrepreneurship and innovation. But for that scientist facing that endless corridor with doors on either side, she has to decide what she's going to spend her life on. What lab is she going to join? What skills is she going to acquire? And you make those choices early in your life. And a couple postdocs later, it's kind of hard to switch. And I feel like what AI can do for the scientific community is try to look behind some of those doors, figure out where there might be more potential, help run quicker experiments, more revelatory experiments. and help that scientist who's early in her career figure out what door she wants to spend her life working behind and then unleash innovations like we've never seen. That's where I think the real promise is for innovation and economic growth. I want to pull on a couple of things there. Let's push a little deeper. What's it mean to look behind a door? That's a great analogy, but how does that play out to take me from looking behind a door as an analogy to actually sitting down in front of a computer and typing something into OpenAI? Yeah, let's talk about that. 
I can take this analogy a little further and maybe your listeners will go with me. Maybe they won't. But suppose she opens up the door, the first door on the left. Behind there, there's a stack of literature, scientific papers and books and things that have been written about that particular area. And then there's a bunch of what you might call jigsaw puzzle pieces over there, right? Some of them have been put together in little pieces, and it's looking like the left corner of the puzzle is complete. But there are other parts of the puzzle that aren't complete at all. And she's trying to figure out which pieces to fit into that puzzle to make it whole and get a sense of what's going on in this world. And it could be that this door leads into another door, which is connecting multiple fields. Often we see innovation at the intersection between different fields, like biology and chemistry, and of course, even deeper than that. And so one of the challenges is, how do you know which pieces to put together? How do you know which novel combinations are actually going to be useful? And of course, just combining them isn't always enough. You have to put them through a lot of different tests to figure out whether they're durable, whether they're giving you scientific insight. And so I sort of feel like AI can help us both brainstorm which combinations might be most useful and, two, help us run through some of those combinations and figure out which ones are most fruitful. That's really the promise, I think, of AI in science. Folks working in those fields are finding really important applications in all parts of the scientific process. But for me, I think about it in terms of trying to brainstorm new ideas, make new combinations, make testing and experimentation more efficient. And then when you do find things that work, scaling them faster. That's where I really think AI can make a big difference. But it's going to be folks in science applying this and finding these things. 
And I'm sure we'll unlock lots of really interesting process innovations over the next several years. You know, that's come up with several of our guests. We had Moderna on. We had Pirelli making tires. And one of the nice things is these tools have such amazing memories. I think for me, when I go look for something, actually, I'm going to make fun of Ronnie here for a second. We were just looking for some headphones and, you know, you see him pulling out the same drawer two or three times looking for some headphones. And the machines don't have to do that. They remember where they've looked. And just that combinatoric search can really explode, and narrowing down what's a good place to search and, hey, what we've already looked at here. That's big. I agree. And, you know, this is why I should have asked ChatGPT where my headphones were, right? This would have been better. But I agree with you, right? Think about a world where you and I are working on a project together and we're working with AI as our co-investigator, our co-worker, and there's a shared memory between us, right, about what you've worked on, what I've worked on. As we talked about earlier, so much of the interesting scientific discoveries are at the intersections of our knowledge bases. And AI can play a role in intermediating between those. A lot of human collaboration is hindered by what they call tacit knowledge. You know, I know more than I can tell. And if you and I are working together in a situation where AI can help to reveal some of that tacit knowledge, man, maybe it'll unlock new discoveries. That's super interesting for me. We're getting a little philosophical here, but you're talking about the intersections between different sciences and disciplines. What's our advice to people? Are these tools going to be so awesome at going deep that we should be generalists? Are these tools going to be so awesome at being generalists that we need to go deep to differentiate? 
Those are two big different doors on different sides of the corridor. Massive questions, and it goes back to people's debates over, should I become a generalist or a specialist, and T-shaped leaders and all these different questions we've been thinking about in business schools for a long time. What I'll say first is we have to have humility about this. I wish I knew the answer to your question with 100% certainty. If I did, I would share it with my children right away, because they're asking the same questions, or they will when they get a bit older. We do live in a really uncertain time. I'm very sympathetic to young folks who are trying to figure out which direction to go. I do think, though, going deep, even if it doesn't end up that you are going into that field and doing that exact work, is really useful because of the discipline of going deep. When you ask an audience to raise their hands if they studied engineering undergrad, when you go to a place like Duke or MIT, you'll see a big number of hands raise up. But when you ask people, keep your hand up if you still work as an engineer, right, then a lot of those hands go down. Does that mean that the engineering degree, the undergrad degree, was not worth it? No, those folks will raise their hands right back up and say, no, I use it as a product manager. I use it as a consultant. I use it in banking. I use it in my everyday life. And so I do think the idea that you have to choose a direction that has to be perfectly aligned with the job that's out there, that probably wasn't true even 10 years ago. And it's definitely not going to be true going forward. I wish I had the perfect advice to navigate the exact career that everyone should do. It's very, very difficult. But I do think going deep on something and following your interests, that's usually a good way to start. And then you can figure out how to morph that into the career opportunities that are available to you. 
I know that sounds kind of mushy, but I actually don't think there's a very direct path that I can advise anyone other than that. Yeah, well, okay. And then, just for the record, I did not slip him a $20 to say that it's okay to be a reformed engineer. I think, as listeners know, I'm a chemical engineer that no longer does any chemical engineering. And I didn't also slip him a $20 to talk about the benefits of going deep, because I feel like the ability to consume information from these models depends on having depth that otherwise you're blindly trusting. That's right. And I've been thinking about this a lot with the early career job market. I think one of the interesting things is for people who are more senior in their careers. You'd think that, ironically, you know, senior people would be less likely to use AI tools and then they'd be less advantaged in the job market. If you think about AI tools as a complement, though, to expertise and experience and deep learning, you'd actually expect it to help those older workers more. And, you know, some of the patterns we'll be worried about in the job market may be about that, where AI is a real complement for people with deep expertise. The question we have to answer is, how do folks who are just starting off in their careers get that expertise if the job market starts to change? And that's something, as an economist, I think a lot about. For me, when I'm reading about economics using ChatGPT, I know the papers, I know the literatures, I can tell when something isn't exactly right, or if there's a hallucinated citation. But a younger economist might not have that. And so it is maybe a stronger complement to me doing a literature review than it would be to a person just starting off. And I think that's a really important dimension of AI that's not talked about enough. Yeah. So let's come back. 
One of the things I also wanted to push on a little bit is you alluded to the idea that one of these doors will open and we'll have a huge, big find behind it. I think it's super plausible that we'll have productivity benefits. I'm faster at reading something or summarizing something. So it's maybe hard to quantify, but I know I'm 100% there. What's the likelihood we're going to open a door and find a new electricity or a new nuclear power or, you know, I don't know what the next GPT, or general purpose technology, is out there that we may find. Is it possible that these things are going to help us do that? I think it's possible. I think what you're seeing now is the pathway to the kind of productivity increases that you've already bought into. Like, if you look at the paper that we just wrote, "How People Use ChatGPT," you see a path, how people are going to use this to improve their writing, ask it for help to make decisions more efficient and streamlined. And a lot of us are already doing that in our personal life and at work. And so you see a path to saving us time, saving us money, resulting in a lot of consumer surplus, as economists call it, coming from AI tools. The next piece, though, and this is, I think, almost a requirement if you're going to see the transformative economic growth that many are predicting, is through innovation and the scientific process. And I think that is where the questions you've asked, like how will AI be able to do that, are going to become really important. I do think it's possible. The reason I think it's possible is because the capabilities of AI are moving really, really fast. I think as an economist, the one thing I didn't realize before I joined OpenAI a year ago is how quickly the capabilities are evolving and moving. 
And when you look at our performance on the International Math Olympiad or some of the recent evaluations related to economically valuable tasks, GDPval, as it's called, you see quite a steep curve of improvement on these tasks. And I think as an outsider, I was sort of generally knowledgeable that things were moving in the right direction, but not really seeing the shape of the curve. And when you look at that and you start to think, what would that look like if that rate of growth continued? Then you start to think about AI being able to do really amazing things. So I do feel like that's a real possibility. Hard to put a percentage on it, to be honest. And I think we've got to plan for either scenario, right? That it unlocks these great secrets and innovations, or that it doesn't. Yeah, you alluded right there to the extrapolation problem. You know, it's really tempting to look at these numbers and draw a line between, oh, the Olympiad a year ago, the Olympiad now. You may be referring to some of Erik Brynjolfsson's work recently talking about the, I think, 4% to 72% with some of the numbers. It's really hard not to draw a line between 4% and 72% and keep going. But you don't get over 100% unless you're a football coach, and then, you know, all percentages are thrown out. How do we think about extrapolating these things? I mean, one thing is we do the easy stuff first, so we're going to get the biggest gains first. So that points to an idea of diminishing returns. You also alluded to an idea of combinatorics, where you might get superlinear returns because of benefits in two areas coming together. So how do we extrapolate? I think early on, we saw the capabilities of AI evolving according to the so-called scaling laws, which is really interesting, right? The more compute, the more powerful chips, the more data that you have, the models could become continuously more powerful as you added more of that. 
And then, of course, there was a reckoning about whether scaling laws applied. And then we saw reasoning models and other innovations that kind of pushed it forward and said, look, okay, now there's a new set of scaling laws, which is if the model takes a longer time to think, it's going to be able to solve these problems better. And so we don't know exactly how quickly these new innovations and micro-innovations will arrive. I think this is why labs like OpenAI exist. I think what's happened over the last 10 years, and you see this among the AI researchers, where I'm privileged to work with some of the best in the world, of course, is they're living at a time when their field has been completely transformed. I mean, it's pretty inspiring. Like, as a fellow researcher, obviously from a different field, to see people who are in the midst of a time when their field is utterly just being transformed, and that look in their eyes, like anything is possible. I get it. I haven't lived through that in my field, right? But I can get what it must feel like. And I'm kind of watching that from the outside and seeing that. And so I do think that we have to temper that, of course, with the idea of how much more you can see increases in some of these dimensions, your point earlier. And then the question is, is there a sense of a general intelligence that's going to be good at everything? Or are you going to have more specific applied models? I tend to be someone, because I work so much in organizations, to be very practical about this and say, okay, there might be really intelligent models. But when it comes to solving problems inside organizations, I happen to think that we're going to need some specialized solutions in finance or healthcare or education that are going to solve particular problems and, maybe more importantly, be aligned with regulatory and institutional realities. 
So I actually think the question is both how capable models become, but also can we find ways in those verticals to adapt them, to fine-tune them for those application verticals, to make them not just smart, but effective. That to me is almost as hard as making the models smarter themselves. Yeah, we wrote a paper, gosh, almost a decade ago when the analytics boom was out there, about how much easier it is to make models faster and more sophisticated and better, but how much harder it is to get organizations to use that. And so there's a gap that's inevitable. Let me switch tack here for a second. You make a bunch of tools. We've been happy all this conversation talking about how science is going to use these tools to find amazing things, but so are the bad guys, right? You're making something that is in effect a tool, and people will use tools for nefarious purposes. What's OpenAI going to be able to do about that? You know, I'm not naive enough to think that we can stop everything, but I think we can slow things down. Well, I think there's tremendous interest, and really from the very beginning at OpenAI, around the risks to our safety that come from increasing capabilities of AI. I have numerous colleagues who are working full time to address these safety issues across a bunch of different dimensions. Everything from national security to mental health to thinking about how AI is going to affect participatory democracies and governments. All these things are things that people at OpenAI are working on. One thing I think is really fun working there, being an economist, is you find people in each area who are experts in their fields working to think about the impact of AI on their particular area of study. And it's also very practical people who are thinking about what the usage data is actually telling us, how to implement some solutions in the real world, as well as building on a lot of academic expertise. It's a really good combination. 
Safety is the core of the mission of OpenAI, a big, big topic of discussion internally that we have. And I think that OpenAI feels like disseminating information about the capabilities of the model, both what it's able to do that is really going to be exciting, but also things that are going to be potentially problematic, is really important. So I think the transparency piece is a big part of the culture, and also the notion that we're going to probably have to work together across organizations with governments around the world to really solve these safety issues. And my economics work overlaps with that a little bit, but it's mostly by watching these folks work in the organization that I can say that's the kind of approach they've taken. I like the idea that there's some economists involved to offset some of the technologists who might be pursuing things from a pure can-we-make-it-better standpoint without thinking about some of the resulting consequences. So we have a segment where I'm going to ask you a bunch of rapid-fire questions. Just say the first thing that comes to your mind. Is AI making you spend more or less time with technology? I think it's less time for me, because I answer a lot of questions quickly with AI, where I would usually use another tool and I now can just get the direct answer. That shopping was a good example. I'd be playing around a lot of different websites, looking at different things. So for me, it's a little less. I don't know if that's true for everybody, but less. Okay. What's the worst use of AI? How are people using this in a way they shouldn't? Outsourcing your critical thinking. You know, this is something I take really seriously as a professor and someone who just loves to learn. It'll be a shame if people are using this to avoid going deep or not engaging with really difficult questions and just having a chatbot spit it out for you. 
I really hope, for my kids, that they and their colleagues and classmates, as they get older, are not going to do it that way. And we really have to work hard on that, everybody. But that's the worst possible use I can think of. I mean, setting aside the really deep existential safety risks you talked about earlier, making sure we don't outsource critical thinking is really, really important to me.

What's frustrating you about AI right now?

I think the popular debate about it. I mean, in one sense, it's super exciting. People are knocking on my door and coming up to me at cocktail parties, so, you know, I can't be angry about that; there's a lot of interest. But the debate is pretty binary, like AI good or AI bad. There's a lot of discussion around predictions and forecasts that are really hard to make. Everybody wants to click through some sort of list of different jobs and their likelihood of being affected. And the hard thing about this is that you're never going to be called to account for making a good or bad prediction. So we just have predictions. At the same time, I'm trying to do some of the work to show, hey, here's how people are really using it. But by the time you do that research, it takes a while, and people say, oh, I already knew that; that's exactly how I was using it. So I think we have to continue to create space for real data and real analysis on how AI is being used, not just by the labs but by outside academics. And at the same time, let's all enjoy the stories about it, because we need to read something every day with our cup of coffee. But the real news gets made when we come up with new insights from analyzing the real usage data and the ways people are using AI.

Okay. So, yeah, what's the best use, then?

So right now, and I don't know if it's the best use, but it is the use. We did the biggest analysis so far.
We have 1.5 million conversations from ChatGPT, and we basically find that there are three big use cases. One is seeking information; you can think about that traditionally, like a web search. And that's how I found the genes, right? Then there's practical guidance, which is a really interesting one that I think economists need to think more about. A lot of people are using AI just to inform decisions, to make better decisions, to streamline decisions. AI as a decision assistant is, I think, the key place where it's being used really effectively now. And that creates a lot of consumer surplus, but it doesn't necessarily show up in GDP. The last piece is writing. Think about most white-collar jobs, which are the focus of a lot of the job discussions out there: most of those involve writing, whether you're a consultant or a financial analyst or a tech product manager. So writing is really one of the big uses. If you double-click into that, you'll find that most of the writing work is submitting something you've already written and asking for an edit or a summary, not outsourcing the writing completely to ChatGPT. So that feels pretty good. I want to do more work on the writing piece. But those are the top three uses right now on the consumer side of the platform. This is not the API or ChatGPT Enterprise, and that's really important. People read this stuff and say, oh, so this is what everyone's doing at work. No, no, no. This is the consumer side of ChatGPT, the accounts that someone like you or me would use personally. I will say there's a lot of work being done on that side, too, which is really interesting. But we have a whole other set of work to do to unlock how people are using it at the enterprise level.

That's great.
Actually, I feel like anything we've talked about today, we could spend a whole episode on.

Yeah, I'll be back. Let me know. I'll be back.

Well, thanks for joining us today.

Sam, it's an honor to be here. Thanks for having me.

Thanks for listening. Speaking of using AI in our personal lives, I'll be joined next time by Will Crosshorn, product manager at Wendy's. We'll be talking about how Wendy's is using AI to enhance the drive-through experience. Please join us.

Thanks for listening to Me, Myself, and AI. Our show is able to continue in large part due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.

Hi, listeners. We're running a short survey to learn more about our audience so that we can continue to bring you a podcast you find helpful. If you have a moment, please take the survey at mitsmr.com slash podcast survey. You'll receive a complimentary download of MIT SMR's executive guide, How to Manage the Value of Generative AI. Please take the survey this month at mitsmr.com slash podcast survey. We'll put that link in the show notes, and thank you for your help.