Digital Disruption with Geoff Nielson

AGI Is Here: AI Legend Peter Norvig on Why It Doesn't Matter Anymore

66 min
Nov 17, 2025
Summary

Peter Norvig, AI legend and former Google research director, discusses why the term AGI is less useful than focusing on making AI systems more reliable, safer, and capable. He explores the surprising effectiveness of scaling language models, addresses misconceptions about AI disruption, and emphasizes that transformative technologies typically arrive gradually rather than as sudden breakthroughs.

Insights
  • AGI has already arrived in the sense of general-purpose AI systems (like ChatGPT), but the transition was gradual—we didn't notice a singularity moment because technological shifts happen incrementally and we adapt to them
  • The real bottleneck in AI advancement isn't raw capability but handling the long tail of exceptions and edge cases that require deep organizational knowledge—one-person AI shops will struggle with domain complexity
  • AI will likely increase productivity gradually without visible GDP spikes, similar to how PCs and the internet didn't show dramatic economic impacts despite transformative potential
  • Job disruption from AI will be driven by speed of change rather than total unemployment—workers may need to transition between roles faster, requiring stronger social safety nets like UBI
  • Current LLMs succeeded because scaling and attention mechanisms work better than expected, not because we solved the hard problem of reasoning—future progress requires evolution, not revolution
Trends
  • Shift from narrow AI systems to general-purpose models that can handle multiple tasks without retraining
  • Growing focus on AI safety and responsible development from inception, unlike previous tech waves
  • Emergence of agentic AI systems that can reason, experiment, and reflect rather than just predict next tokens
  • Democratization of AI tools enabling non-technical users and small businesses to automate workflows without hiring engineers
  • Gradual rather than sudden economic disruption from AI, requiring proactive social policy interventions
  • Open-source AI models becoming inevitable despite safety concerns; focus shifting to defense and mitigation rather than containment
  • AI-assisted discovery in mathematics and computer science reaching competent graduate-student level
  • Flattening of organizational hierarchies as AI enables better information routing and reduces need for management layers
  • Increased demand for domain expertise and business acumen over pure technical AI credentials
  • Third-party certification and professional standards emerging as a governance model (similar to Underwriters Laboratories for electricity)
Topics
  • AGI Definition and Terminology
  • Language Model Scaling and Effectiveness
  • AI Safety and Responsible Development
  • Agentic AI and Reasoning Systems
  • AI-Assisted Scientific Discovery
  • Job Market Disruption and Workforce Adaptation
  • Universal Basic Income and Social Safety Nets
  • Open-Source AI Models and Security Risks
  • AI Governance and Regulation
  • Organizational Hierarchy and AI Integration
  • Misinformation and AI-Generated Content
  • Cybersecurity and AI-Enabled Attacks
  • Small Business AI Adoption
  • Professional Certification in AI Engineering
  • Economic Productivity and GDP Growth
Companies
Google
Norvig was research director at Google; discussed as example of a company with extensive AI safety policies and internal review processes
Stanford University
Norvig is an AI fellow at Stanford and teaches at the Human-Centered AI Institute (HAI) co-founded by Fei-Fei Li
OpenAI
Creator of ChatGPT, referenced as example of general-purpose AI system that surprised researchers with its capabilities
Anthropic
Mentioned for research on AI safety and preventing models from providing dangerous information
Meta
Referenced as major AI company competing for PhD-level AI talent
Underwriters Laboratories
Norvig joined their AI safety board; cited as model for third-party certification of AI systems
Salesforce
Referenced as example of enterprise software that small-to-medium businesses use for workflow automation
ITA Software
Airline travel software company acquired by Google; used as example of long-tail complexity in enterprise systems
People
Peter Norvig
AI legend discussing AGI, language models, safety, and future of AI technology and workforce disruption
Geoff Nielson
Podcast host conducting interview with Peter Norvig on AI trends and implications
Stuart Russell
Co-authored the 'Artificial Intelligence: A Modern Approach' textbook with Norvig
Blaise Agüera y Arcas
Co-authored with Norvig the article arguing that AGI, in the sense of general-purpose AI systems, is already here
Fei-Fei Li
Co-founded the HAI institute where Norvig teaches; focused on AI's societal impact and human-centered design
Yann LeCun
Advocates for tearing down current AI approaches and starting over; contrasts with Norvig's evolutionary view
Geoffrey Hinton
Expresses concern that AI development is dangerous; represents more pessimistic view than Norvig
Eliezer Yudkowsky
Advocates for extreme caution on AI risks; represents far end of danger-focused spectrum
Eric Schmidt
Shifted position from opposing open-source AI models to accepting inevitability of their distribution
Cassie Kozyrkov
Created analogy comparing hiring PhD in AI to hiring PhD in stove design for restaurant
Terence Tao
Used AI to assist in mathematical proofs; described AI systems as competent graduate-student level
Joel Spolsky
Wrote influential article on why not to rewrite software; cited for insights on hidden complexity in systems
Quotes
"I don't really like the term. I think there's no clear definition of it. Everybody has a different idea of what it is, depending on what it is, achieving it will vary by five, six, seven orders of magnitude."
Peter Norvig, early in the discussion on AGI definition
"We made a transition in, say, 2022 or so, of going from writing programs that were specific, a program to play go, a program to recognize images and so on, to programs that are general."
Peter Norvig, discussing when AGI arrived
"It'll get better and better. There won't be one point when we say this is the transition. It'll just do more and more."
Peter Norvig, on gradual AI advancement
"The bottleneck doesn't really seem to be creation of the junk. The bottleneck is building up the networks that can get it propagated to others."
Peter Norvig, on misinformation and AI
"I think there's real dangers and I think bad things are going to happen. But I think overall, the good will outweigh the bad."
Peter Norvig, on AI risks and optimism
Full Transcript
Hey everyone, I'm super excited to be sitting down with AI legend Peter Norvig. Peter is a former research director at Google, an AI fellow at Stanford, and the author of the most important text on AI of the past 30 years, Artificial Intelligence: A Modern Approach. What's so special about Peter is that he's sat at the forefront of AI research and teaching for over three decades, so he hasn't just pushed the technology forward, but educated an entire generation of AI leaders. I want to ask him how close we are to unlocking the true potential of AI, what he's most worried about, and what we need to do to build the future we want. Let's find out. Peter, you're the author of, you know, the pre-eminent, if I can call it that, textbook in the AI space, Artificial Intelligence: A Modern Approach, which has recently turned 30. And so this is an area that you've been thinking about, you know, since at least 1995, I'm sure a lot longer. As you think about where the technology was then, where it is today, one of the things I'm hearing a lot about these days is a lot of hype around, you know, oh, we're only one to two years out from artificial intelligence reaching its final form, or being AGI and having this full potential. Do you believe that? How far has this technology come since, you know, you wrote the first edition of this book, and how close are we to the modern version actually achieving what is, you know, the complete promise of this technology? Yeah. So we have seen amazing progress in the last couple of years. I do think it's ironic that, you know, 30 years ago, we titled this book a modern approach, and we kept the same title, so I don't know how it can be modern both 30 years ago and today. And it does seem like, you know, textbooks seem obsolete now because they come out on a cycle of several years and AI is advancing on a cycle of several weeks. And so it is exciting what's been happening the last couple of years. I think unanticipated by most, certainly unanticipated by me. And just this idea that scaling up in data and processing power, with a few very clever ideas for algorithms, made such a difference. So I think that's really different. In terms of AGI, I think, you know, I don't really like the term. I think there's no clear definition of it. Everybody has a different idea of what it is, depending on what it is, achieving it will vary by five, six, seven orders of magnitude. And I guess I feel like there's not going to be a moment when we say AGI is here. I don't believe in sort of this hard takeoff idea. I think we'll get better and we'll just get used to it. And I think past technologies have been like that, right? So if we had all of a sudden gone from the days when, if you wanted to learn something, you had to drive to the library, to the days where you have a machine in your pocket that gives you access to all the world's information. If that had happened in one day, people would say, this is an incredible singularity and transformation. But it happened gradually and we just got used to it. And so I think it'll be the same with AI. It'll get better and better. There won't be one point when we say this is the transition. It'll just do more and more. Now, Blaise Agüera y Arcas and I wrote this article a year or so ago in which we said AGI is already here. What we meant by that was not that the machines we have now are perfect. They're certainly flawed in many, many, many ways.
But if you take the G seriously, we made a transition in, say, 2022 or so, of going from writing programs that were specific, a program to play Go, a program to recognize images and so on, to programs that are general. So ChatGPT and the like can do lots of things that the inventors never realized. And we liken that to the invention of the computer, maybe going back, say, to the ENIAC in 1945 or von Neumann's MANIAC a few years later, where they were 100% general. Now, they're terrible computers by today's standards. They're big and clunky and slow and have no memory and slow processing speed. But if you have a conditional statement, a branching statement, and a sequential statement, and you can read and write memory, then you're 100% general. You're as general as a Turing machine. You can't get more general than that. And so in that sense, we now have programs that are general. We write them. They can do things we didn't think of before. So that's general. They're imperfect and they'll get better and we'll play with that technology. But I don't see having AGI as the focus, as being that helpful right now. I'd rather focus on how can we make them better? How can we make them more reliable? How can we make them safer? What else can they do? I appreciate that distinction. And so with that in mind, as we look at generative AI and tools like LLMs, what's your reaction to that? You used the word imperfect, these sort of imperfect chatbots. And maybe that's something that will never go away and something that we have to get used to. But it sounds like you're leaning toward acknowledging that they are AGI in the way that we would have described it 10 or 20 years ago. Yeah, I think that's right. So, you know, just as I was saying with the phone in your pocket, if we had gone in 1990 and, you know, said all of a sudden, here's the ChatGPT of today, I think everybody would say, OK, AGI is here. You know, this is an incredible leap. This is AGI. But because we got it gradually, there's been resistance to that. That, yeah, that makes sense. And so, I mean, if you look at the progress and you mentioned that you yourself were surprised by this, what specifically did you find, you know, maybe most surprising or, you know, was something that you didn't see coming? So, you know, you said Stuart and I started writing the book in 1995, but I went to grad school in AI in 1980. And my topic was natural language processing. And the way I looked at it, I said, well, there's two problems. One is there's written words on the page. We have to figure out what they mean or how to process them. And we saw that as a problem in linguistics. We had to figure out what's the syntax of this language, what's the definitions of these words, how they relate to each other. And we said, OK, we think we have a handle on how to do that. But then we said, then there's a lot more that's going on up here in the head. And we said, I don't know how to do that. That's going to be the hard part, right? We're going to get the linguistics part right. But then, you know, how do you react to a sentence? What does it mean? How does it relate to another sentence? That's going to be hard. And that's going to have to figure out what's going on up in the head. And then we built these LLMs by saying the thing we're going to put in the head is very sort of broad priors that have the capability to pay attention and learn, but not much else. Otherwise, it's kind of a blank slate.
And then we're just going to push billions of words past it and it worked. And I think nobody really anticipated that that would work. We thought there was going to have to be a lot more going on in figuring out how thinking works, and not just passing a lot of words past it. Do you see the... So I'm glad you said that. And it's aligned with kind of my thinking and some of the thinking I've heard from other, you know, kind of AI leaders in this space. The approach of passing a lot of words through this tool. Do you see that as kind of a continued trajectory to where we need to get to in terms of the next generation of... You know, the phrase that comes to mind is AI as agents or agentic AI, which is something you and Stuart have been talking about for a long time. Yeah, I guess, you know, you can look at the picture and react in different ways. Some people say, well, we have to tear it all down and start over again. And some people say, well, we just have to continue to make it better. I think it's interesting that Yann LeCun is on the tear-it-all-down side, and I'm on the let's-continue-to-make-it-better side. But other than that broad philosophy, I think we're very aligned in our views. So we both say, here's some things that are missing. We have to be able to do reasoning better in certain ways. We have to have a better connection to the physical world and so on. And I see that as evolution and he sees it as revolution. It makes sense. And I think a useful distinction there to say, OK, well, we're still broadly talking about the same thing. So when you look out over the horizon about where this is going, and there's no shortage of benefits or challenges on each side, what's sort of top of mind for you in terms of what we need to get right next? And how is that influencing where you're spending your time and mental energy? Yeah. So we want to build tools that are useful and don't make dangerous mistakes. So I think we're on the right track. So a language model by itself seems like it's not all that useful. Right. And so there's a very specific mathematical definition of what a language model is. It's a probability distribution over the next words, given the previous words or the context. And just having that doesn't seem that useful. And when we had the first chatbots, it was like you give it a prompt and then it says, OK, what's the next word I'm going to spit out? And then what's the word after that? And then what's the word after that? To me, that didn't seem like artificial intelligence. That seemed more like artificial politician. Right. They're really good at spitting out one word after another without pausing to think. But now we have these models that do more reflection. So rather than just saying, what's the next word I'm going to spit out, it says, let me try 10 different lines of approach. See where they go, criticize them, compare them, vote. And maybe in the future, run some experiments, look something up, see how it relates. And only when I've done all that, now I'll start responding. And that seems much more like intelligence. And we're starting to see those kinds of approaches. But we've got to figure out how to do it better.
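To make that contrast concrete, here is a minimal sketch in Python; the model stub and all names are hypothetical placeholders for a real LLM, not any particular product's API. Plain decoding just samples one word after another from the distribution P(next word | context); the reflection style samples several candidate answers, then compares them and votes.

```python
import random
from collections import Counter

def next_token_distribution(context: str) -> dict:
    """A language model in the mathematical sense Norvig gives:
    a probability distribution over the next word given the
    context, P(w_t | w_1 ... w_{t-1}). This stub is a hypothetical
    placeholder for a real trained model."""
    return {"yes": 0.5, "no": 0.3, "maybe": 0.2}

def generate(prompt: str, max_tokens: int = 3) -> str:
    """Plain decoding: spit out one word after another,
    with no pause to think."""
    tokens = []
    for _ in range(max_tokens):
        dist = next_token_distribution(prompt + " " + " ".join(tokens))
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(word)
    return " ".join(tokens)

def reflect_and_answer(prompt: str, n_candidates: int = 10) -> str:
    """Reflection-style decoding: try several lines of approach,
    compare them, and vote before committing to an answer. A simple
    self-consistency majority vote stands in for the critique and
    comparison steps described above."""
    candidates = [generate(prompt) for _ in range(n_candidates)]
    return Counter(candidates).most_common(1)[0][0]

print(reflect_and_answer("Should we deploy on Friday?"))
```

The voting step is what separates a system that merely predicts the next word from one that pauses, compares alternatives, and only then responds.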
So coming back to, I guess, the concern piece, you mentioned we have to make sure that things don't go too off the rails with these technologies. What do you see as some of the, I guess, more and less realistic risks for this technology in the next handful of years? Yeah. So I think there's a lot of issues, right? And I think we don't have the best record with that, right? So, you know, as an industry, we invented social media. We saw some of the advantages of bringing people together. But I think we didn't do a good job of foreseeing all the problems of addiction and misinformation and so on. And I think anytime you have a powerful tool, it can be used for good or for bad. And, you know, maybe it's a flaw of the tech industry that they tend to be maybe more idealistic and less connected to the real world, and optimistic, and see the good uses and don't defend enough against the bad uses. I think we're in a pretty good place in AI because right from the start, there's been all this talk of AI safety. You know, I wish we had done a better job of that with other technologies. Right. So when the internal combustion engine was invented, that was a great thing for humankind to provide all these services and transportation and get food to people. But we didn't really foresee all the effects of pollution and so on. But here, sort of right from the start, it seems like AI is concerned with that. I'm worried about the misinformation type issues to some degree. I guess a large part of it, I feel like, well, we've already got that. We've already got cheap labor that can generate junk and push it out there. And the bottleneck doesn't really seem to be creation of the junk. The bottleneck is building up the networks that can get it propagated to others. So I don't see AI as fundamentally changing it. It's just another tool that the bad guys can use. So that's an issue. I worry about sort of empowering bad actors to have more powerful technology. Right. So, you know, a lot of work has gone into: if you ask a chatbot, how do I make a pathogen that will kill a billion people, it's supposed to say no, I won't do that for you. But we have open source models that allow you to get around that. And fortunately, Anthropic and others have done this research and said, well, kind of the things you need to know aren't really out there. And so neither the search engines nor the chatbots really have access to that. So that's a little comforting. But you can easily imagine a model in the future saying, that's an interesting question. I don't know the answer to that. But from my knowledge of biochemistry in general, here's some ideas you could try. And maybe it could guess right. So that would be a bad thing. And I think we're seeing this in general, not just with AI. We're seeing sort of the dumbing down or the cheapening of the ability to impose your will on the world. Right. So 50 years ago, if you wanted to impose your will, you kind of needed a big aircraft carrier group that you would send off to threaten other people. Now, a much cheaper set of drones can be as effective. So technology has increased the ability of smaller groups to do more attacks. And that's a danger, and AI helps make that even more powerful. I'm also worried about income inequality. And again, that's not uniquely an AI problem. It's an inherent problem in digital technologies where the cost of reproduction is near zero.
And that tends to concentrate wealth in the hands of a few. And I think that's dangerous for society. I think that's well said. And as I process both sides of what you said there, starting with, yes, we seem better positioned than with a lot of new technologies to actually be investing in and caring about the safety. However, there are a number of varied risks that we can't necessarily get around. Certainly compared to some of the people I speak to here, it sounds like on balance, you're gently optimistic about this and our ability to get our arms around this, versus some of your peers being in a... and by the way, the one that comes to mind is Geoffrey Hinton, around how we're completely screwed here. Is that kind of gentle optimism? Is that reflective, would you say, of where you're at? I think so. I mean, I think there's real dangers and I think bad things are going to happen. But I think overall, the good will outweigh the bad. And Geoff's got his set of concerns. There's certainly people like Eliezer Yudkowsky even farther out on the danger side. I understand you're doing some work now with the Human-Centered AI Institute. I got that right. Good. And working with folks over there like Fei-Fei Li, what sort of mandate does that institute have, and is the work over there being done that you think is helping to address some of these challenges? Yeah. Right. So this is an institute at Stanford, founded by Fei-Fei Li and some colleagues, looking at how AI affects society and how we can make it work for people. And so, you know, there's also this design school at Stanford. And so I see that HAI is kind of a continuation of that. How do we design products that use AI that will be useful for people, will augment them rather than replace them, that will be fair and unbiased and will be easy to use. So that's what sort of the charter of the institute is. I'm teaching a class there. The institute does a lot of policy type work. So we recently did a boot camp for congressional aides and tried to teach them, here's where the current state of AI is, here's the kinds of things you might be worried about, both in terms of promises and threats, and in terms of what possible legislative role Congress can play. So I think that kind of work, not really advocacy, more education, is part of the role of HAI. What kind of role on that note do you see Congress potentially playing? I think most of the issues could be already covered by existing law, right? So most of the time when you do something bad, it's the fact that you did something bad, not how you did it. Right? So we have rules that murder is bad, and we don't have specific rules for murder with a particular technology. Although, that being the case, there are places where we single out specific technology. So we do say that if you use a gun in certain cases, that changes the nature of the crime. Um, so I think, uh, you know, government has a role to say, how are we going to use this in a way that doesn't take advantage of people? How are we going to share those benefits across, and how are we going to let it grow while also keeping it under control? And I kind of feel like there's a lot of players involved, and I don't want to put all the emphasis on government. So yes, they have a role to play. But, uh, in my experience, government tends to react at a speed that's slower than the speed that AI is going. So I worry about that. Uh, so I think other players are important too.
So self-governance is important, and all the big AI companies have their AI policies. And from my experience, it's taken pretty seriously. So, you know, internal to a company, they say, can we try doing this? Well, no, we can't, because we have to clear this first because of our policies. And I think that works well. Um, I think there might be a role for professional societies. We haven't had that before in computing. All right. So I get to call myself a computer scientist, and, you know, I have some degrees and some experience, but I don't have anything official. And anybody could just say, all right, I'm a computer scientist or I'm a software engineer, and I'm going to release some software, and they let you do it. It's great. In other fields, they don't do that. I couldn't go out tomorrow and say, you know what, I'm going to call myself a civil engineer and I'm going to go build a bridge. They don't let you do that. You need to be certified in order to do those kinds of things. I don't want to slow down the software industry, but I think there might be a role to say, if you get to a certain level of power of these models, maybe there should be some certification of the engineers involved. And then finally, I think external third parties can play an important role. So I actually joined an AI safety board with Underwriters Laboratories. And I thought they were interesting because the last time we had a technology that the public thought was going to kill everybody, it was electricity. And everybody was worried they were going to get electrocuted. You know, you can see some of these vintage cartoons showing death and destruction. And then Underwriters Laboratories came along and said, we're going to put a little sticker on your toaster or your microwave, and it means it's not going to kill you. And consumers trusted that. And because consumers trusted it, companies voluntarily submitted themselves for certification. And that seemed like a good thing. And I think maybe these third party nonprofits can be more agile than a government can in setting regulation. The, um, the comparison to electricity is a really interesting one. And, uh, you know, I won't rehash the whole story here, but I'm sure you're as familiar, if not more familiar than most, with the story of alternating current and direct current and Edison and Tesla. And it's funny, I've never reflected on it before, but it seems like you could draw some parallels with those competing standards with some of the big players in AI and some of the narratives they have about their competitors right now and how safe or unsafe their models are. A little bit, although, you know, it isn't to the point where we have to say, you know, we have to choose one standard that everybody's going to be hooked up to. Well, that's where I was going with this. One of the narratives is like, oh, this is an arms race, and one person will get there and everybody else will lose out. Do you see it as being winner take all? Or do you see it being more of kind of a long tail of different models and technologies that are more fit for purpose? Yeah, I see it as not being winner take all, but I see it as a few winners take most. Um, so I guess there's two issues.
One is, uh, you know, some of these futurists or singularitarians are saying, well, there's this hard takeoff scenario where, you know, one team figures out the magic so that its model doubles in a week, and then it doubles again in a day, and then it doubles again in an hour, and then the rest of the world's left behind. I don't really believe that. Um, you know, so I think so far we've seen parity among the top groups, and I think we'll continue to see that. So I don't worry so much about one dominating because they have a technology that leaves everybody else behind. If you'd asked me three or four years ago, I would have said, well, it's going to be a very small number of players, because there's only a few providers who have enough data centers to build these really huge models. Um, and I think those few companies will do really well, but I don't think they're going to capture everything, for two reasons. One, we've seen these much smaller models become very capable. And two, we've seen more demand for kind of privacy, people saying, well, I want something on premises because I don't want my data or my queries to go outside. So I think, you know, yes, the big companies are going to capture a lot of market, but there's going to be lots of other ones as well. There's an interesting tension in my mind between, you know, fully democratizing some of this technology versus keeping the really important stuff contained within a few different companies. And I think I've heard you say before that, you know, one of the pieces that concerns you the most is open source AI and what, you know, a bad actor or an organization could do with some of these models if there isn't enough safety or regulation there. Where does that balance out in your mind? Like, when we have to balance it being broader versus more contained, how would you answer that question in terms of where we should draw the line? I mean, I guess my feeling is it doesn't matter what I think, because the cat's out of the bag. And, uh, I think you were right that I was hesitant. You know, again, I mentioned Yann LeCun; he's really pushing hard for these open models. I was saying, you know, wait a minute, maybe it'd be good, if somebody's making a query to do something terrible, that it gets logged somewhere. Uh, and I guess another person I can mention that I've seen the shift in is my colleague Eric Schmidt, who was very adamant in saying, we can't have open models because of the threat from bad actors, uh, you know, two or three years ago. And now he's switched and said, it's too late. These models are powerful enough, the bad actors want to use them, they can create them. So we might as well harvest the good of the open models, because the bad guys have kind of gotten them anyways. And I think that's right. I think there's nothing you can do about that now. So what are the implications there? Given that the cat is out of the bag, what do we need to do, I guess as, uh, you know, calling it an industry is a bit strange, but as kind of an ecosystem, to reap the most benefits or at least, uh, you know, mitigate our risk? I mean, I guess what we've got to do is say, be aware that here's another attack vector, right? And I think in some areas we've done a really bad job.
So in terms of cybersecurity, as an industry, we haven't really focused on that, and there are a lot of losses from bad guys coming in and stealing data and extorting and so on. And we've accepted that trade-off. We've said, we want the industry to move fast. We want to provide you with all these tools, and we'll do that without guaranteeing that they're safe, and we'll accept those losses. Uh, and maybe that was a good choice, but I think now maybe we can think a little bit harder to say, well, if there's more powerful attacks, maybe we should build our systems to a more reliable standard. And I guess I'm kind of optimistic there. So I'm not an expert in cybersecurity, but I talk to the people who are, and they kind of feel like, yes, these will be good tools for the attackers, but maybe they're better tools for the defenders. The reason the attackers are successful is because we build software that's just full of holes. And the systems we build are too complex for a human to understand, but it seems like maybe an AI system could understand them. And we could ask the AI, you know, here's this million lines of software, analyze where the holes are and tell me how to fix them. And if we can do that, then the attackers have a much harder job. Right. So it's almost like AI can raise the tide for all ships there. And, you know, it's an interesting perspective. And it's one I appreciate, because I feel like there's so much fear these days of, well, every year in all things cyber is going to be riskier than the last. And you're saying, well, maybe that doesn't have to be the case. Yeah. I think we can make things more secure. So, yeah, you know, kind of adjacent to that, one of the undercurrents of all of this thematically is just speed, and everything is accelerating. It's going faster and faster. And who knows what model will be out tomorrow that's not out today. Um, and I think about that in relation to, you know, an article that you wrote a number of years ago, Teach Yourself Programming in Ten Years, which was, you know, in some way, a response to this, you know, addiction to speed and just everything is now, now, now. Do you, you know, given when that article was written and where we're at, do you still hold on to those, you know, principles, or do you think something has fundamentally changed? Uh, so yes and no, right? So I still think if you want to really understand a field, and software engineering could be one example of that, but in any field, you do have to put in that work. And, you know, it may not take exactly 10 years, but you're going to have to put in a lot of time, and you're going to have to study, and you're going to have to deeply understand things. On the other hand, there's a lot of things you can do without that deep level of understanding. And now it seems like programming is one of them. Uh, so I have no issue with, uh, you know, many people, maybe the majority of people, saying, you know, it'd be really cool if I had a piece of software that did this, let me chat with this chatbot and build something that seems to work. Let's go. Uh, I think that's going to be fine. I do see kind of a generational schism on this. Right. So this happened a couple of times with me, working with a younger colleague at work, and we discover, hey, here's this new software package, seems to do what we need. Uh, great.
You know, let's figure it out. And so I sit down, and, you know, I'm reading through the manuals and I'm taking some time and I have questions. And then my colleague comes back in and says, okay, I'm done. Let's go. And I say, what do you mean you're done? And they say, well, I figured it out, you know, we call this method A, and then we take this result and then we call B, and then we have it. And that's the answer. Let's move on to the next problem. And I say, but, you know, I don't understand this package. How does it do X, Y and Z? And they say, no idea, but I know this will give me the right answer. And I feel like there is this trade-off of, sometimes it's really important to deeply understand something or else you're going to get bitten later. You know, sort of building up technical debt if you don't understand what's going on. Other times completely understanding it is not important, and just getting the right answer is important. And it's hard to make that trade-off. And I feel like in these kinds of instances, neither of us is making a rational trade-off, right? So I like to study and understand because that's what I was used to. And my younger colleague likes to go fast because that's what they were used to. And, you know, maybe one of us was right. Maybe the other was right. Maybe the right place is somewhere in the middle. Uh, but it's hard to get the experience to know, right? You only got so much time. What are you going to understand completely? And what are you going to just accept and move on, and say, if I get the right answer, it's okay, even if I don't understand it. And I think it's really hard to get that right. That makes sense. It's a tricky balance, and when is good enough good enough. So when I think about that, when we talk about computer science as a discipline, you know, we've already talked about everything from it's easier and faster than ever, to, well, maybe we need to start thinking about credentialing this more for the safety of the greater good. In your perfect world, how would you like to see, you know, AI and some of these, you know, advancements in technology, where would you like to see them take computer science that's going to be most beneficial for everybody? Yeah. So I think that's really interesting. You know, we're starting to see some advances, or even some AI assisted discoveries, in computer science and math. There's been announcements recently from Terence Tao, you know, one of our foremost mathematicians, and others, saying, you know, here's an AI assisted proof that I came up with. And Tao has said these systems seem to be at the level of a not incompetent grad student. Uh, so that's kind of promising. And, you know, maybe soon they'll be at a pretty good grad student. So they can do things that we couldn't do before. And I feel like it's kind of the first time we've had a technology like this, because all our other tools have been: if I can formalize something, then the tool can help me work through these formal calculations, once I've done the hard work of describing what I wanted to do. And it feels like the current AI systems are the first ones that can help us in that process, of saying, I want to go from a messy semi-understanding to something that actually works, without having to do all the formalization myself. Right. And so, you know, we've been burned by that in the past.
Right. So you go back to the 1980s, and there are all these predictions saying, oh, you know, worker productivity is going to go up so much because we have PCs now and, you know, we have spreadsheets. So, uh, you know, 90% of the accountants will go away because it will all be automated. Well, that didn't happen. And why didn't it happen? Well, it's true that if you want to add up a long column of numbers, a spreadsheet is a great tool. But most of what the accountant was doing was not adding up the numbers. That was a very small part of their job. The big part of their job was knowing what numbers go where, and here's this expense and what column does it go in, and so on. And we couldn't automate that part. Uh, and so therefore productivity did not go up that much. But the AI systems we have now seem like the first tool that maybe can do that messy kind of thing. So is the implication there, and I want to tease this out a little bit more, because certainly these days it feels like there's a lot riding on, you know, productivity is going to skyrocket across all these organizations. And, you know, on the one hand, there's not so fast, it hasn't before. And on the other hand, there's, well, this time may be different. What's kind of your prediction? Like, is this again just, gradually over time, it'll trickle out, or what will the productivity gains, if any, look like? I think it'll be gradual. You know, if you look at GDP by year, well, first we have, you know, tens of thousands of years where it was pretty flat. But then we got the industrial revolution and it started to go up. And if you look at, like, GDP in the US over the last hundred years, it's a pretty steady line with little wiggles in it. And you can really only see two events in the last hundred years. And that's the Great Depression and World War II. And everything else, there's a little dip, but then it goes back to the trend line. Uh, so it feels like technology is giving us faster progress than we had pre-technology, but no one technology has been, uh, kind of instrumental. You know, as a computer scientist, maybe I thought, well, when you get PCs in offices, that would make a big difference. You can't see that on the chart of GDP. And maybe GDP is measuring the wrong thing in certain ways, but basically you can't see that much difference. Now, another chart that I think is interesting is comparing China to the US. So if you compare China to the US, say in the nineties or so, they were up at like 10% GDP growth, and now we're at two or three. And I think what was happening is they were kind of reaping these technology benefits all at once. Right. So we kind of slowly said, well, computers are coming, in the 70s, the 80s, the 90s. China was not deploying any of that technology. And then they kind of did it all at once. And that gave them about a 10% annual GDP growth. So I think you could see a case that AI could do a similar kind of thing. Um, I was at this meeting of economists and AI people, and they took a poll of, what do you think GDP growth is going to be, I think it was 20 years from now, I forget the exact number. And the median was about 10%. But the range was a thousand percent to the complete destruction of civilization. Uh, so we got some wide error bars on that.
That's a very generous way of putting it. So let's stay on the China piece for a minute, and what can happen when you have these more open source technologies and when you can have these kind of leapfrog technologies, I guess, that help China bend the curve upwards. And so I'll ask the question broadly, but, you know, what's, I guess, your concern level, or the concern level that you think the average American or Westerner should have, both in terms of the overall health of the economy, as well as the health of the job market, given that, you know, these technologies are increasingly available outside the walled gardens of the West? Yeah. Uh, we're certainly seeing shocks to the job market. Um, I think mostly what we're seeing so far has not that much to do with AI. I think it has more to do with sort of the recovery from COVID. We had a big up and down, and it's still reverberating from that. And so I see, you know, students at Stanford come to me and say, well, I didn't get a job offer yet. What's happening? And, you know, in the past, they would come and say, well, I got a job offer from each of the top six companies and from these four startups, how do I choose between them? Right. So that's a big difference. So far, everyone eventually gets a job. Uh, so the difference is, you know, rather than getting 10 job offers, they're getting one, and it's taking a little bit longer to do that. And I think that's because, uh, you know, during COVID, everybody was online. Companies over hired. Uh, now we're coming out of that, and there's some threats to the economy, and companies are cutting back. Um, I don't think we've seen the full effects of AI yet. Certainly you do see it in small places. Uh, you know, I was just talking to someone who said, well, we used to have an artist on staff to, like, make up the logos and PowerPoint slides and so on. And now we don't have that. We've seen that kind of thing before, right? So there used to be, you went into a typical office in the Mad Men, uh, 1950s era, and there'd be a lot of people whose job was typing. And now we have much less of that. And most people are expected to do their own typing. Uh, and so now maybe people will be doing art on their own with the help of these tools. So I don't think that's a huge effect on the economy. I guess there will still be plenty of jobs. It might be harder to find some of them, but I think the main issue will be the speed of the disruption. And, you know, so we've seen this disruption before. It used to be the majority of Americans were farmers, and now only a couple of percent are farmers. Uh, but that happened over generations. And now we're seeing changes happen over months or years. Um, and that may be too fast for people to adapt to, right? It was okay to say, well, my grandparents and my parents and I were all farmers, but it looks like my kids are going to go off to college and take a job in the city, good for them. And it's much harder to say, I had one job and I was laid off by AI, and then another, and then another, and then another. And now I'm pretty pissed. Uh, so I think we're going to have to deal with that. There have been a lot of these proposals for some kind of universal basic income or something like that.
I don't know exactly what the right thing is, but I think we do need better kinds of social safety nets, because there will be this kind of disruption. So it sounds like you're concerned then that this isn't just part of the cycle and reverberation, as you said, from COVID, but that we may see, at least in the medium term, call it a higher unemployment floor, before we get back to whatever the new version of farmers is. Is that fair? Yeah. I mean, I don't know about total employment or unemployment, but I think there'll be more disruption, right? So, uh, you know, it may be that you're employed, but you may have to move from job to job faster. So, you know, you could see the economy going up, but everybody feeling worse, because they're nervous. Right. Yeah. And, you know, I kind of feel like we had this invention of the full-time job, and I see that as kind of a mutual insurance policy, right? So why do we get insurance? Well, you know, the expected value of insurance has to be negative for the individual and positive for the insurer. But the value to the individual is to even out the ups and downs. And I see a full-time job as like that. Right. It's probably not the case that the most value I could provide to the world would be staying at one company permanently. Probably, if I split my work between multiple companies, that would be more effective for the world as a whole, but it would be a cost on me to have to go out and find these gigs and never know where my next paycheck was coming from. And so we accept this sort of suboptimal use of resources to have this steadiness and even things out. And if we're going to start losing that steadiness, we're going to need some other type of insurance, or guilds, or UBI, or something, to make people feel more secure. It's an interesting point. And as you were talking about it, I was thinking, I really like the analogy. And I guess there are benefits for the employee, and to the employer as well, for full-time labor. I'm thinking about, you know, trust, and, you know, switching costs in terms of hiring costs and firing costs. And if you just bring in all these fractional people, how useful is that? But, um, it does sound like in this world where it's a combination of people plus machines, if I can broadly call them that, if I can just kind of throw that back at you, it sounds like when you think about the future of work, you see it as less rigid, more flexible, if I can choose an optimistic word, but not necessarily in a way that's fully beneficial for the employee. Is that fair? Or how would you call it? I think that's right. So I think less rigid, I think, is an important part of it. And I think this kind of communication that AI enables is an important part of that, right? So, you know, why do we have hierarchical structure? Because, you know, it was impossible to have all N squared interactions. And so we have the interactions flow up and down through a hierarchy. And now there's only order N interactions instead of order N squared. Uh, so that was important. But if we have these AI systems that can route the right information to the right person, then we need less hierarchical structure. Um, and so that's a possibility.
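A rough back-of-the-envelope version of that argument (the worked head-count is illustrative, not a figure from the conversation): N people talking directly to each other need on the order of N squared communication links, while a hierarchy that routes messages up and down a tree needs only on the order of N.

```latex
% Communication links among N people:
%   everyone-to-everyone (full mesh) vs. a tree-shaped hierarchy
\underbrace{\binom{N}{2} = \frac{N(N-1)}{2}}_{O(N^2)\ \text{pairwise links}}
\qquad \text{vs.} \qquad
\underbrace{\,N - 1\,}_{O(N)\ \text{links in a tree}}
% e.g. N = 10^4 employees: about 5 \times 10^7 pairwise links,
% but only about 10^4 links routed through a hierarchy.
```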
But if that hierarchical structure means, uh, you know, now you have a job, then tomorrow you don't, then that's stress on the individual. Well, I want to come back to the question of social safety nets and universal basic income, because there's an implication there, and I'm not coming out one way or the other on it right now, but there's an implication that the economics of artificial intelligence are creating a world where there's more value capture, if I can call it that, by the firms. And then there's got to be some sort of redistribution of wealth that helps the people at the bottom who are struggling. Is that the world you envision in terms of sort of winners and losers, or would you paint it differently? Uh, yeah. So I do think that in general, automation allows firms to capture more value. And then I think we want to, you know, sort of figure out who gets credit for that value. And there are a lot of cases we're working through now in things like copyright law. I'm not sure copyright is exactly the right thing to be worried about. But, you know, who deserves credit for this stuff that we've built up? And how do we allocate that fairly? Right. So maybe let's zoom out for a minute, and who deserves credit is such a big, difficult question. And so I'll ask maybe a slightly easier one, which is just broadly, who do you see as being the winners and losers of this disruption? I guess, you know, the losers are people who had a safe position that was protected by various kinds of moats, and now we'll see challengers come in. Um, and then the winners will be those that are agile enough to exploit that, to use the technologies and see opportunities. So, and I don't know if you've been in this position, but if you had a CEO or a CTO come knocking on your door and say, Peter, I'm really interested in AI, in, you know, just basically the advancement of some of these technologies, and how my firm can capitalize on this and what we need to get ahead, either in our industry or in terms of, you know, modernizing our organization. What advice would you give them, and what would you tell them to watch out for? Yeah. So I do a lot of that, right? So one of the roles at HAI is educating companies as well. So we've done a bunch of that. Also at Google for Startups, we mentor startup companies; they're usually a little bit more technologically savvy, but are asking some of those questions. Um, I look at it as saying, let's not think of AI as being unique. Let's think of, what are your market opportunities? What are the tools you have to address that? How can you make your organization more efficient? What are the goals that you're trying to achieve? And if you can lay that out clearly, then we can start looking at saying, here's a place where AI can play a role within that workflow. I like that. And it makes a lot of sense to me. And, you know, in some ways it's seeing tools as tools, versus seeing them as being be-all end-alls. Are there any views or assumptions you're seeing people coming to you with, as patterns about AI, that you're like, that is just totally wrongheaded? Or, I guess, if I can ask that another way, are there particular flavors to the hype where you're saying, just look out for that, ignore that, that's going to put you on the wrong track? Yeah.
So I think we've seen, you know, pretty fast evolution in sort of the public eye. If you go back like three years, every article about AI had a picture of the Terminator robot with glowing red eyes, right? And it was, robots are going to kill us all. Uh, now we're seeing less of that. Now the robots are only going to steal your job. They're not going to kill you. Um, one thing I see a lot, you know, talking to the CTOs and so on, is they come to me and say, well, I need to hire a PhD in AI, but I can't get any, because Google and Meta already hired them all. And my colleague Cassie Kozyrkov has a great analogy. She says, that's like saying, well, I'm the owner of a restaurant and I need to hire a PhD in stove design. And the answer is, no, you don't need that. What you need is a chef who will tell you what stove to buy, understands how to operate the stove, and knows what the customers want to eat and can make that. And so I think that is a flaw, that some people see sort of the cutting edge of AI research and say, you know, every company has to be doing that, instead of saying every company should learn how to use the appropriate tools and fit them into what they're trying to do. I really like that. And I'm chuckling too, because I just spoke with Cassie very recently as well. She's fantastic. We had a great conversation. Yeah, she's great. Yeah. She was a lot of fun. So just taking that analogy to its logical end, though, if we think about who the chefs are in this AI world, are those still technologists? Are they business people? Are they somewhere in between? What, you know, skill set or role should we be looking at if we're a CTO or CIO? So yeah, I think that's really interesting, right? So certainly somebody who's savvy in technology can do more faster. But I think a really exciting thing is for the non-technologists to be able to get ahead, right? And I think particularly in the small business, right? So say, you know, you want to automate your workflow in your business. If you're a big company, you have an internal IT team and you give them projects. If you're a medium company, you hire Salesforce, and it's more expensive, and you pay these consultants and they do it for you. If you're a tiny company, there's nothing you can do. Right. You don't have any programmers on staff. It's too expensive to afford that. You can't afford Salesforce or any other competitors. So you're stuck. But it does feel like now, if I'm in that small company, we've only got two salespeople, they can sit down together and they can prompt and say, here's what we do on a daily basis, and they can build something that will automate their workflow. And that was never possible before, right? Before, you had to be a real programmer to do that. Now it seems like you can have pretty good luck doing that kind of thing. And this is the first time we've seen that sort of possibility. Yeah. And it seems like a real boon, frankly, not just to any organization, but to a lot of different roles. If what everybody can do with technology increases, right? Everybody's more kind of enabled individually. Yeah. You know, on that note, and staying at kind of, you know, you talk about enterprise IT and we're talking about CTOs.
One of the big trends we've certainly seen, probably over the last couple of decades or even beyond, is just the proliferation and the sprawl of, you know, corporate technology functions, corporate IT. And suddenly we have to add a whole, you know, data governance function and a whole cybersecurity function and a whole infrastructure function. Yeah. There's, you know, in my mind, sort of competing futures of whether, as technology becomes more central, these functions continue to grow, because suddenly we need a new layer of AI people, or they shrink, because we've said we can disintermediate and now the business people are their own technology experts, or somewhere in between. What do you see as being kind of the future of this function? I'm hopeful that it can shrink, and we build our tools and we make them better over time. Right. So, you know, if I think back 30 years when I was programming in C, it was easy for me to make lots of mistakes, right? I could pass the wrong type into a function and it wouldn't complain. Now I get a nice compile-time error that says, you made a mistake, here's how to fix it. And so I think we're building tools that can do a better job of helping us along the way. And I think with AI, it'll be possible to do that at kind of the business level, as well as just the compiling-code level. So that suggests that maybe there'll be less of this, right? And, you know, I hear this at Google. I hear from my friends who've been there for decades saying, oh, you know, it's so much more complicated now. Used to be we just launched a product, and now we have to go through the security review and the privacy review and these five other reviews, and it takes so long. And, you know, one of my reactions is, well, how much longer does it take? And they say, you know, like five times longer, it's terrible. And then I'd say, well, you know, how many more users do you reach on day one? They say, oh, well, like a hundred times more users. So I'm saying, well, so you're saying it's 20 times more efficient per user. And I think that's one way of looking at it. And I think we can get to the point where maybe the AI is automating more of this, right, where there are just sort of company-wide policies, and you write code and it works with those policies, rather than having to have human teams reviewing everything. Right. So potentially you can get a lot more done. You can be a lot more efficient. And I want to continue down that path again, because it's interesting to me that there is still a happy story and a concerning story. The happy story is, you know, we're more efficient than ever. We're able to do more with less. The concerning story is, and you know, you touched on this earlier, Peter, but if you're Joe technologist or Jane computer scientist, well, maybe the enterprise just doesn't need you anymore, because they're running a leaner function. And so if you're someone who finds themselves in this computer science field and is worried about being a winner in this space versus being disrupted, let's say, what's kind of your guidance for these people to make sure that they're still relevant and able to, you know, continue to be valued members of the team? Yeah, I guess what you want to do to be valued is to understand what the goals are of your organization, and fit into that, and use your knowledge to do that.
I want to continue down that path, because it's interesting to me that there is still a happy story and a concerning story. The happy story is that we're more efficient than ever and able to do more with less. The concerning story, and you touched on this earlier, Peter, is that if you're Joe technologist or Jane computer scientist, maybe the enterprise just doesn't need you anymore, because they're running a leaner function. So for someone in the computer science field who's worried about being a winner in this space versus being disrupted, what's your guidance for staying relevant and continuing to be a valued member of the team?

What you want to do to be valued is understand the goals of your organization, fit into them, and use your knowledge to advance them. I think we had this period where people were told, if you become a computer science major you automatically get a good high-paying job, because there's a scarcity of the ability to write code that will compile. Maybe that's not the scarcity anymore. Instead, the scarcity is being able to understand the business need, understand the environment in which you're operating, and design something that solves that need.

I like that framing, and I think you and I are fairly aligned there. There's another idea that's been floating around, and I'm curious what you think of it, because we've been having this conversation in the context of one organization, one enterprise. There's this seductive idea of AI enabling a one-person, billion-dollar shop. Whether we take it to that extreme or just say that a computer scientist can now be an army of one, you don't have to work for Google or name-your-big-tech-company, you could work for yourself and have the tools in your toolkit to do an awful lot more. Do you buy that future? Is it aligned with your view of more flexible work, or is that fanciful thinking?

Certainly one person can do a lot more. But there's a lot to get done that one person just doesn't know, and it's not clear to me that the AI is going to know all those things. Why do companies have thousands or hundreds of thousands of people? It's because there are all these exceptions: you have to operate in all these countries, and they have different rules, and so on. I remember when Google bought the airline-travel company ITA. I was doing due diligence on that, and one of the people at the company was a colleague I'd worked with before. He told me, look, I'll talk honestly: Google's got a bunch of smart programmers, and you could build what we build, but it's going to take you a really long time. You'd get 90% of it done right away, and then there are all these exceptions: all these weird time zones, all these weird fees and regulations that different airlines and different countries have, and so on. We've done all that, and it's not written down anywhere; you have to discover it. So the value of our company is that we've worked through this long tail of exceptions, and there's no easy way around that.

I think there's a lot of that going on. It's going to be easy for one person to write something that solves the easy part of the problem. The rest of it, maybe some can get done by AI if it's written down somewhere, but a lot of it isn't written down anywhere. I think we're still a generation away. You could imagine that being solved by an agent that goes out and discovers all these exceptions by taking actions in the world, but we don't have anything close to that yet. So for the next number of years, there's still going to be value in having a lot of people who know a lot of these exceptional things.
I really like that. It sounds like it's basically a parable for the current state of anyone working with AI in an enterprise capacity: sure, it can get 90% done, but there's an awful lot of asterisks that you still need all those people for.

Yeah. And I just reread Joel Spolsky's article on why you shouldn't rewrite your software. He says you look at the code and see this mess of stuff that seems obsolete and not worth keeping, and 90% of it is junk you could throw out, but 10% of it solves a problem that you don't understand and have never seen before, and it's still a problem. And you don't know which 10% is important and which 90% is junk.

That makes complete sense, and it certainly resonates with me. Peter, I wanted to talk about something completely different while I have you. I've heard through the grapevine that Her is your favorite movie about AI. I love the movie as well; I remember walking out of the theater when I first saw it thinking, I need to think a lot more about that. And it certainly seems like the world has moved in that direction. Do you see this as a reality we're in right now, or about to be in? And do you worry that tech companies see something like that and say, yeah, we should build that, rather than taking it as a cautionary tale?

I think yes. Any time somebody writes a cautionary tale, there will be some tech leader who says, yeah, I want that. And I think we already have it: there are lots of people whose boyfriend or girlfriend is a computer that they chat with. So we're already there to some extent. I don't know if Her is my favorite movie, but I did an event where we had a screening of Her and then a Q&A afterwards. One of the questions was, what other science fiction movie does this remind you of? My answer was: not a science fiction movie, but it really reminds me of Life of Brian, because both of them are about faith. In Life of Brian, here's this schmuck and everybody wants to believe he's the Messiah. In Her, here's this piece of software and the protagonist wants to believe it's his girlfriend. I think we're just built that way; humans want to do that. So when we talk about whether we can build an interactive entity that people will think of as a real person and companion: we did that decades ago. My daughter loved her teddy bear, and it was not very interactive, yet she loved it completely. We want to project our feelings onto other people, onto machines, onto pieces of software. That's just the way humans are built, and I think Her touched on some of that.

So it's in our nature. It's not something we can avoid or do something about; it's something we need to understand and live with, it sounds like.

Yeah, though I do think we want to worry: there are lots of things that give you pleasure in moderation and lead to bad results when you're overly addicted to them.
And you can see this as certainly being in that range.

Peter, I wanted to say a big thank you for being on the program today. This has been really interesting and really insightful, and I really appreciate your time.

Yeah, great talking with you, Geoff. Thank you.

If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.