AI Boom or Bust? AI Boomers and Doomers Reveal Their Predictions for Our Future
75 min
Dec 29, 2025
Summary
This episode presents opposing perspectives on AI's future, featuring both doomers predicting existential risk and accelerationists believing AI will solve humanity's greatest challenges. The debate covers AGI timelines, safety concerns, economic consolidation, and whether current generative AI capabilities justify the hype or represent technological theater.
Insights
- AI safety and control remain unsolved problems despite massive investment, with experts disagreeing fundamentally on existential risk probability (ranging from near-zero to 99.99%)
- Generative AI capabilities are overstated relative to actual use cases; most applications are limited to text generation, summarization, and coding assistance with inherent probabilistic limitations
- Economic consolidation in AI mirrors previous tech booms, with $100B+ training costs creating barriers that favor existing tech giants (Google, Microsoft, Meta) over startups
- The real near-term risk is not rogue AI but misuse by bad actors and geopolitical arms races driven by fear and competitive pressure rather than safety considerations
- Organizations must adopt radical thinking about AI transformation while maintaining practical implementation, focusing on augmenting human expertise rather than replacing it
Trends
- Shift from AGI skepticism to explicit super-intelligence development goals among major AI labs
- Growing recognition that generative AI hype exceeds actual capabilities, creating backlash against over-promising vendors
- Increasing focus on AI governance, certification, and professional standards as safety mechanisms
- Consolidation of AI power among mega-cap tech companies due to capital and data requirements
- Emergence of 'accelerationist' vs 'doomer' camps as dominant narrative frames in AI discourse
- Recognition that AI arms races are game-theoretically equivalent to prisoner's dilemmas with no winners
- Shift toward viewing AI as augmentation tool for skilled workers rather than autonomous replacement
- Growing concern about AI-enabled autonomous weapons and their real-world deployment in 2024
- Emphasis on expectation management and human-in-the-loop systems for enterprise AI deployment
- Recognition that innovation requires tolerance for failure and waste, not just predictable ROI
Topics
- Artificial General Intelligence (AGI) definitions and timelines
- AI existential risk and safety concerns
- Generative AI capabilities and limitations
- Large language models and synthetic text generation
- AI arms races and geopolitical competition
- Economic consolidation in AI industry
- AI regulation and professional certification
- Autonomous weapons and AI-enabled killing systems
- Enterprise AI implementation and change management
- AI-driven disintermediation across industries
- Human-AI collaboration and augmentation
- AI bias, manipulation, and misinformation
- Data monopolies and competitive advantage
- AI infrastructure and deployment challenges
- Future governance models and social contracts
Companies
OpenAI
Major AI lab that raised ~$100B collectively with Anthropic and xAI; discussed as leader in generative AI and ChatGPT...
Google
Dominant tech company with massive AI investments; discussed regarding data monopoly, antitrust concerns, and competi...
Microsoft
Major investor in OpenAI with $100B+ cash reserves; discussed as one of few companies capable of funding massive AI t...
Anthropic
AI safety-focused lab that raised billions; Dario Amodei cited estimating $100B needed for training by 2027
Meta
Tech giant investing heavily in AI; discussed as potential next Google in AI and facing disintermediation risks
xAI
Elon Musk's AI company; mentioned as one of three labs collectively raising ~$100B in AI funding
Info-Tech Research Group
IT research group mentioned as sponsor providing AI strategy, disaster recovery, and vendor negotiation support
People
Geoff Nielson
Host of Digital Disruption podcast; frames debate between AI optimists and pessimists across 40+ episodes
Eliezer Yudkowsky
AI safety researcher cited as early voice warning about existential AI risk; predicted 99.99% extinction probability
Sam Altman
OpenAI CEO; cited for redefining AGI as systems capable of doing 95% of human jobs
Dario Amodei
Anthropic CEO; estimated $100B training costs needed by 2027 for next-generation models
Kai-Fu Lee
Technology expert and author; discussed AI arms race, capitalism's role, and future governance models
Rumman Chowdhury
AI researcher; discussed generative AI limitations, synthetic text risks, and lack of beneficial use cases
Ben Goertzel
AI researcher; discussed AGI definitions, super-intelligence concepts, and rapid self-improvement potential
Trenton Bricken
Technology expert; discussed AI augmentation, enterprise implementation, and innovation tolerance
Oppenheimer
Historical figure cited as example of scientist building destructive technology due to fear of competitors
Eric Schmidt
Former Google CEO; cited as shifting position from opposing open models to accepting their inevitability
Einstein
Historical figure; mentioned informing US about German nuclear bomb development during WWII
Quotes
"Intelligence is not inherently good or inherently bad. You apply it for good and you get total abundance. You apply it for evil and you destroy all of us."
Kai-Fu Lee•Early in episode
"We have no idea how to control super intelligence systems. So given those two ingredients, the conclusion is pretty logical. You basically asking what is the chance we can create a perpetual safety machine, perpetual motion device by analogy."
Eliezer Yudkowsky•Discussion of extinction risk
"Does the Salami understand? Will the Salami help us make better decisions? It's absurd."
Rumman Chowdhury•On renaming AI to expose marketing hype
"Unless you are radical with your thinking you will not be ready for the disruptions that are going to come."
Trenton Bricken•Enterprise transformation discussion
"We are all pioneers right now, whether we want to be or not."
Trenton Bricken•On navigating AI transformation
Full Transcript
Approaching this singularity. Keeps getting more and more intense. We have no idea how to control superintelligent systems. Is it going to be existential and evil, or is it going to be good for humanity? It could be the best thing that ever happened to us. The technology is moving faster than any other sector. The algorithm is creating a feedback loop. This is a good thing, and the laser-eyed robots aren't going to beat us into submission. Intelligence is not inherently good or inherently bad. You apply it for good and you get total abundance. You apply it for evil and you destroy all of us. Hey, everyone. We've got something really special for you today. When we started Digital Disruption, we wanted to put a focused lens on the technology shaping our shared future. And over more than 40 episodes, we've done that by talking to an eclectic collection of the world's foremost experts on technology, leadership, and social progress. From predictions about the next renaissance of human enlightenment, to the sci-fi-esque advancements literally putting computer chips in people's brains, to the digital horrors lurking in the dark and distant corners of our online world, these guests brought forward their best predictions of what the next decade holds. We covered a lot, but there was one inescapable topic. AI. AI. Everything is AI. AI. Generative AI. Transformative AI. Generative AI. Generative AI dominated the conversation, but without any consensus. AI could be our savior or our enslaver. It could herald a golden era of human advancement or the end of the human race. Or it's all a technological sham dressed up in fancy marketing terms, lots of fluff, and no substance. If there was one thing everyone could agree on during our first season, it's that nobody agreed on anything. And so we thought we'd put the most thought-provoking ideas we heard this year head to head, so you can decide for yourself what you believe comes next. Let's jump in. One of the predictions you've made lately that's kind of made the rounds is your prediction of an extinction-level event for humans, created by AI, in the next 100 years. You're putting it at 99.99%. Is that right? Am I missing a couple of nines there? I keep adding nines. I keep meeting people with different p(doom)s, for reasons independent of mine. So every time this happens, an extra nine has to be added, logically. But really you just have to follow the chain of assumptions to get to that number. One: it looks like we're creating AGI, and then quickly after, superintelligence. A lot of resources are going into it. Prediction markets, top experts are saying we're just a few years away. Some say two years, some five years, but they all kind of agree on that. At the same time, according to my research, and no one has contradicted it, we have no idea how to control superintelligent systems. So given those two ingredients, the conclusion is pretty logical. You're basically asking, what is the chance we can create a perpetual safety machine, a perpetual motion device by analogy. And the chances of it are close to zero. If you study history, one of the great things that you learn is that the world gets better all the time. It is very hard to read a bunch of history books and arrive at any other conclusion than that today is the best day ever to be born. And in fact, the beauty of the human experience, and the reason that arguing that humans are good is so easy, is that if humans were actually bad, we never would have arrived here after we crawled out of caves.
The fact that we have all the things that we have, the fact that the world is as safe as it is for most people, and that it gets safer all the time, is a testament to the fact that we are just building a more aligned earth and a more aligned human experience. And that's not to say we aren't fallible and that we don't have lots of problems. But it is to say that our problems are diminishing. And so I don't think the onus is actually on me to prove that the world is going to get better. I actually think the onus is on someone else to say this is the peak of civilization. When you really think about it, a lot of people, when they look at technology, they think of this current moment as a singularity, where we are really not very certain of what's about to happen. Is it going to be existential and evil, or is it going to be good for humanity? I unfortunately believe it's going to be both, just in chronological order, if you think about it. And, you know, you mentioned that we have all of those challenges around geopolitics, about climate, about economics, and so on. And I actually think all of them are one problem. It really is the result of a systemic bias of pushing capitalism all the way to where we are right now. And when you really think about it, none of our challenges are caused by the economic systems that we create, or the war machines that we create, and similarly not by the AI that we create. It's just that humanity, I think, at this moment in time is choosing to use those things for the benefit of a few at the expense of many. I think this is where we stand today. I think AI is an incredible technology. Obviously the internet has changed society in profound ways. But some of the overpromise almost feeds the other side's skepticism. Like, AI is going to help some scientists cure cancer, but AI isn't, quote, going to cure cancer, at least anytime soon. One big difference is the money. So when I first started writing about tech, I was always interested in the venture capitalists and the startups and that whole ecosystem. Like, this idea: our idea for a company is either going to work and be worth, back then, tens of millions, hundreds of millions, now billions, if not a trillion, or it's going to be worth nothing. And the venture capitalists were staking, back then, millions; now tens of millions, hundreds of millions, billions. But in 1995, venture capital was under $10 billion a year. By 2021, it was over $300 billion a year. Roughly $130 billion went into AI startups last year. I mean, a lot of it went into a few, like Anthropic, OpenAI, and xAI, Elon Musk's company. They raised collectively tens of billions of dollars, almost $100 billion just between those three. But there's still a lot more money going to AI startups. So the money has really changed. I guess the final difference is, when the internet came out, maybe the biggest criticism was around the attention span. Oh, if you're always online, this instant gratification, what was it going to do to consumerism and our society? There was much more of a worry, much more of a backlash. People didn't greet OpenAI with open arms the way they did the internet. People are fearful of it. We could talk about that. I think it's kind of Hollywood-induced fear. I don't think the media has done such a great job with AI. So AI kind of has a double battle. There's the usual battle of creating a startup and trying to cash in.
But the second battle is trying to convince people that this is a good thing, and the laser-eyed robots aren't going to beat us into submission. I think intelligence is a much more lethal superpower than nuclear power, if you ask me. Even though it has no polarity, just so that we're clear: intelligence is not inherently good or inherently bad. You apply it for good and you get total abundance. You apply it for evil and you destroy all of us. But now we're in a place where we're in an arms race for intelligence supremacy, in a way that doesn't take the benefit of humanity at large into consideration but takes the benefit of a few. And in my mind, that will lead to a short-term dystopia before what I normally refer to as the second dilemma, which I predict is 12 to 15 years away. And then a total abundance. And I think if we don't wake up to this, even though it's not going to be the existential risk that humanity speaks about, it's going to be a lot of pain for a lot of people. My favorite subject to cover as a journalist is a debate. There's something very attractive to me about trying to understand, in good faith, why intelligent people come to such different conclusions when looking at the same material. And I had known that there was a contingent inside of the world of artificial intelligence that was really, really worried about it for many years. Like, Eliezer Yudkowsky's podcast interviews in 2013 or something is when I first realized that there was this almost biblical prophet voice out there saying that the sci-fi movies are kind of true, and we really need to get ready. We need to get prepared for this. And after ChatGPT blew up, I started to increasingly run into essentially the opposite side of that debate, which are these people often called the accelerationists, who believe that AGI, this artificial general intelligence point that they believe is coming, could be the best thing that ever happened to us. And so I was attracted right away to the people who have those strongly opposing views inside the same world. AGI itself is not a coherent set of technologies. It is a marketing term and has been from the beginning, from the initial convening in 1956, in which John McCarthy and Marvin Minsky invited a bunch of folks to Dartmouth College to have a discussion around quote-unquote thinking machines. So that's one part of it. The second part of it is that the current era of AI, the generative AI tools, including large language models and diffusion models, really are premised on this idea that there is a thinking mind behind them. So in the case of large language models, especially when they are used as synthetic text extruding machines, we experience language, and then we are very quick to interpret that language, and the way we interpret it involves imagining a mind behind the text. And we have these systems that can output plausible-looking text on just about any topic. And so it looks like we nearly have solutions to all kinds of technological needs in society. But it's all fake, and we should not be putting any credence into it. I think that's so interesting, and I'm absolutely of the same mind, by the way. And I found myself laughing when I was reading through your book. You know, first of all, artificial intelligence, I completely agree. I do have to give credit, because it is great marketing. It's just so evocative of something. But, you know, nobody can really seem to define exactly what that is.
And of course, it has all these ideas and can be used for any purpose. But one of the things you do early on in the book is you kind of just pop that balloon by saying, well, you know, what if it wasn't called artificial intelligence? Can you share a little bit about what that sounds like, and why you encourage people to do that? Yeah, so we have a few fun alternatives that we call on. Early on in our podcast, Alex coined 'mathy maths' as a fun one. And there's also, due to the Italian researcher Stefano Quintarelli, 'Salami,' which is an acronym for systematic approaches to learning algorithms and machine inference. And the fun thing about that is, if you take the phrase artificial intelligence in a sentence like, does AI understand? Or, can AI help us make better decisions? And you replace it with mathy maths or Salami, it's immediately obvious how ridiculous it is. You know, does the Salami understand? Will the Salami help us make better decisions? It's absurd. And just sort of putting that little flag in there, I think, is a really good reminder. If you look at what generative AI was meant to be and what large language models were meant to stand for, they were kind of always set up to fail. They were meant to be this panacea: we're going to be the future of consumer software, we're going to be the thing that kind of restarts growth in software as a service. As I'm sure you will know, software as a service has been slowing since 2021. Actually, kind of before that, if I'm honest. People had been freaking out for several years before COVID, in fact. But generative AI was meant to be this thing you plug into anything and it just creates new revenue. The problem is that generative AI and large language models are inherently limited by the probabilistic nature of these models. What they can actually do is they can generate, they can summarize. You can say, oh, they can do some coding things, but that's really what they can do. And they can't learn, because they have no consciousness. So what they can actually do as products is very limited. It's very limited indeed, because what people want them to do is create units of work. They want them to create entire software programs. You can't really do that. Oh, can you create some code? You can create some code. But if you don't know how to code, do you really want to trust this? You probably don't. So inherently, you've got all of these hundreds of billions of dollars of data centers being built to propagate large language models that don't have the demand and don't have the capabilities to actually justify any of it. I wanted to ask the two of you a slightly different question. So one of the things I normally ask guests that I speak to here is what they think is bullshit. And I'm not going to ask the two of you that, because I think we've spent quite enough time talking about what is bullshit. And I know we've got some strong and well-supported views here. I wanted to flip the question around and ask: in this sphere, what isn't bullshit? What are you excited about? I'm a technologist just like Alex is. I run a professional master's program in computational linguistics, training people how to build language technologies.
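A minimal sketch of the sampling loop being criticized here, with a made-up two-word vocabulary and invented probabilities (every name and value below is an assumption chosen for illustration, not anything from a real model or from the episode):

```python
import random

# Toy stand-in for a language model's decoding loop: text is produced by
# sampling the next token from a probability distribution. No step anywhere
# checks the output against the world, which is the "probabilistic nature"
# limitation discussed above. Vocabulary and weights are invented.
NEXT_TOKEN_PROBS = {
    "the cat": [("sat", 0.5), ("slept", 0.3), ("flew", 0.2)],
}

def extrude(prompt: str, rng: random.Random) -> str:
    """Sample one continuation for the prompt from the toy distribution."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print([extrude("the cat", rng) for _ in range(5)])
# Output varies with the seed; "flew" is extruded roughly a fifth of the
# time simply because the distribution says so. The result is plausible
# looking, never verified.
```

The point of the toy is only the shape of the loop: plausibility comes from the weights, and nothing in the pipeline supplies veracity.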
So I definitely think there are good use cases for things like language technology, and the Te Hiku Media example is wonderful. But I see no beneficial use case of synthetic text. And I actually looked into this from a research perspective. I have a talk called 'ChatGPT: when, if ever, is synthetic text safe, desirable, and appropriate,' or those adjectives in some order, I don't remember the exact title. And basically it has to be a situation where, first of all, you have created the synthetic text extruding machine ethically. So without environmental ruin, without labor exploitation, without data theft. We don't have that, but assuming that we did, you would still need to meet further criteria. So it has to be a situation where you either don't care about the veracity of the output, or it's one where you can check it more efficiently than just writing the thing in the first place yourself. It has to be a situation where you don't care about originality, because the way the systems are set up, you are not linked back to the source where an idea came from. And then thirdly, it has to be a situation where you can effectively and efficiently identify and mitigate any of the biases that are coming out. And I tried to find something that would fit those categories, and I couldn't. So certainly language technology is useful, and other kinds of well-scoped technology where it makes sense to go from X input to Y output and you've evaluated it in your local situation. If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe. The challenge is, AI is here to magnify everything that is humanity today. Right? So that magnification is going to basically affect the four categories, if you want, of what I normally call killing, spying, gambling, and selling. So these are really the categories where most AI investments are going. And of course we call them different names. We call them defense. It's just to defend our homeland, when in reality it's never been in the homeland. It's always been in other places in the world, killing innocent people. Now, if you double down on defense and on offense and enable it with artificial intelligence, then scenarios like what you see in science fiction movies, of robots walking the streets and killing innocent people, not only are going to happen, they already happened in the 2024 wars of the Middle East, sadly. They did not look like humanoid robots, which a lot of people miss. But the truth is that very highly targeted, AI-enabled autonomous killing is already upon us. Right? And so the timeline is, let me start from what I predicted in Scary Smart. So when I wrote Scary Smart and published it in 2021, I predicted what I called, at the time, the first inevitable. Now I like to refer to it as the first dilemma. The first dilemma is: we've created, because of capitalism, not because of the technology, a simple prisoner's dilemma, really, where anyone who is interested in their position of wealth or power knows that if they don't lead in AI and their competitor leads, they will end up losing their position of privilege. And so the result of that is that there is an escalating arms race.
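The first-dilemma structure just described maps onto a standard prisoner's dilemma, sketched below with purely illustrative payoff numbers (the labels and utilities are assumptions chosen for the example, not figures from the episode):

```python
# Two rival labs (or states) each choose to "restrain" or "race".
# Payoffs (A, B) are invented, but ordered so that racing dominates
# individually while mutual restraint beats mutual racing collectively.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),  # coordinated, safety-conscious development
    ("restrain", "race"):     (0, 5),  # the restrained side loses its position
    ("race",     "restrain"): (5, 0),
    ("race",     "race"):     (1, 1),  # the race to the bottom
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes player A's payoff against a fixed rival move."""
    return max(("restrain", "race"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

for rival in ("restrain", "race"):
    print(f"rival {rival!r} -> best response {best_response(rival)!r}")
# Both lines print 'race': defection dominates, so the only equilibrium is
# (race, race), even though (restrain, restrain) pays everyone more.
```

Because racing is the best response whatever the rival does, the equilibrium is mutual racing, which is the game-theoretic sense in which, as the conversation puts it, no one wins.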
It's not even a cold war, per se. It is truly a very, very vicious development cycle, where America doesn't want to lose to China, China doesn't want to lose to America, so they're both trying to lead. Google, or Alphabet, doesn't want to lose to OpenAI, and vice versa. And so basically this first dilemma, if you want, is what's leading us to where we are right now, which is an arms race to intelligence supremacy. It's game-theoretically equivalent, I think, to a prisoner's dilemma. Individual interest is different from communal interest. So everyone developing this wants to be the most advanced lab with the best model, and then have government force everyone else to stop, so they can forever lock in the economic advantage. The reality is, it's a race to the bottom; no one's going to win. So if we can do a much better job coordinating and collaborating on this, there is a small possibility that we can do better than where we're heading right now. The challenge, you know... in my book Alive, I write the book with an AI. So I'm writing together with an AI, not asking an AI and then copy-pasting what it tells me; we're actually debating things together. And one of the questions I asked... she called herself Trixie; I gave her a very interesting persona that the readers can relate to. And I asked Trixie, what would make a scientist... Because, you know, I left Google in 2018 and I attempted to tell the world this is not going in the right direction. I asked Trixie, I said, what would make a scientist invest their effort and intelligence in building something that they suspect might hurt humanity? And she mentioned a few reasons: compartmentalization, and, you know, ego, and I-want-to-be-first, and so on. But then she said, the biggest reason is fear. Fear that someone else will do it and that you'll be in a disadvantaged position. So I said, give me examples of that. Of course, the example is Oppenheimer. So I said, what would make Oppenheimer, as a scientist, build something that he knows is actually designed to kill millions of people? And she said, well, because the Germans were building a nuclear bomb. And I said, were they? And then she said, yeah, when Einstein moved from Germany to the US, he informed the US administration of this, this, and that. So I said, and I quote it in the book openly, and a very interesting part of that book is that I don't edit what Trixie says, I just copy it exactly as it is, I said: Trixie, can you please read history in English, German, Russian, and Japanese, and tell me if the Germans were actually developing a nuclear bomb at the time of the Manhattan Project? And she responded and said: No, exclamation mark. They started and then stopped three and a half months later, or something like that. So you see, fear takes away reason. Basically, we could have lived in a world that never had nuclear bombs if we had actually listened to reason: the enemy attempted to start doing it, then stopped doing it; we might as well not be so destructive. But the problem with humanity, especially those in power, is that when America made a nuclear bomb, it used it. And I think this is the result of our current first dilemma, basically. You know, it's interesting.
One of the parallels that gets thrown around a decent amount, and I'm certainly guilty of this, is talking about the AI risk in comparison to the nuclear risk that we created in the first half of the 20th century, and which continues to exist now. If I look at the nuclear risk, I hate to use the word optimist in relation to nuclear risk, but the optimist in me says: hey, we deployed nuclear bombs, there were mass casualties, but we didn't destroy the world. We were able to collectively say, okay, that's far enough. We're going to put treaties in place, and we've stepped back from the precipice, at least so far, and averted extinction-level events with nuclear war. Is that something that can be applied to AI, or is there a reason that makes this time fundamentally different? So nuclear weapons are still tools. A human being decided to deploy them; a group of people actually developed them and used them. So it's very different. We're again talking about a paradigm shift from tools to agents. At the time, we used 100% of the nuclear weapons we had. That's why we didn't blow up the planet. If we had had more of them, we probably would have. So it doesn't look good. The treaties? They all really failed, because many new countries have now acquired nuclear weapons, and they are much more powerful than what we had back in the World War II era. So I think it's not a great analogy. The result of the current first dilemma is that sooner or later, whether it's China or America or some criminal organization developing what I normally refer to as ACI, artificial criminal intelligence, not worrying themselves about any of the other commercial benefits, other than really breaking through security and doing something evil... whoever of them wins, they're going to use it. And accordingly, it seems to me that the dystopia has already begun. And I need to say this, because maybe your listeners don't know me, so I need to be very clear about my intentions here. In one of the early sections in Alive, the book I'm writing with Trixie, I write a couple of pages that I call 'late stage diagnosis.' And I attempt to explain to people that I really am not trying to fearmonger. I'm really not trying to worry people. You know, consider me someone who sees something in an X-ray, and who, as a physician, has the responsibility to tell the patient: this doesn't look good. Because, believe it or not, a late-stage diagnosis is not a death sentence. It's an invitation to change your lifestyle, to take some medicines, to do things differently. And many people who are in late stage recover and thrive. And I think our world is in a late-stage diagnosis. And this is not because of artificial intelligence. There is nothing inherently wrong with intelligence. There is nothing inherently wrong with artificial intelligence. Intelligence is a force without polarity. There is a lot wrong with the morality of humanity at the age of the rise of the machines. So this is where I have the prediction that the dystopia has already started, simply because we've seen symptoms of it in 2024 already. That dystopia escalates. Hopefully we would come to a treaty of some sort halfway. But it will escalate until what I normally refer to as the second dilemma takes place. And the second dilemma derives from the first dilemma. If we're aiming for intelligence supremacy, then whoever achieves any advancement in artificial intelligence is likely to deploy it.
Think of it this way: if a law firm starts to use AI, other law firms can either choose to use AI too, or they'll become irrelevant. And so if you think of that, then you can also expect that every general who expects to gain an advantage in war gaming or autonomous weapons or whatever is going to deploy it. And as a result, their opposition is going to deploy AI too, and those who don't deploy AI will become irrelevant; they'll have to side with one of the sides. When that happens, I call that the second dilemma. When that happens, we basically hand over entirely to AI, and human decisions are taken out of the equation. Simply because if war gaming and missile control on one side is handled by an AI, the other cannot actually respond without AI. So generals are taken out of the equation. And while most people, you know, influenced by science fiction movies, believe that this is the moment of existential risk for humanity, I actually believe this is going to be the moment of our salvation. Because most issues that humanity faces today are not the result of abundant intelligence. They're the result of stupidity. If you look at the curve of intelligence, if you want, there is that point at which the more intelligent you become, the more positive an impact you have on the world. Until one certain point, where you're intelligent enough to become a politician or a corporate leader, but you're not intelligent enough to talk to your enemy. And when that happens, that's when the impact dips to negative. And that's the actual reason why we are in so much pain in the world today. But if you continue that curve, superior intelligence by definition is altruistic. As a matter of fact, in my writing I explain that as a property of physics, if you want. Because if you really understand how the universe works, everything we know is the result of entropy. The arrow of time is the result of entropy; the current universe in its current form is the result of entropy. Entropy is the tendency of the universe to break down, to move from order to chaos, if you want. That's the design of the universe. The role of intelligence in that universe is to bring order back to the chaos. And the most intelligent of all who try to bring that order try to do it in the most efficient way. And the most efficient way does not involve waste of resources, waste of lives, escalation of conflicts, consequences that lead to further conflicts in the future, and so on and so forth. And so in my mind, when we completely hand over to AI, which in my assessment is going to be five to seven years away, maybe 12 years at most, there will be one general who will tell his AI army to go and kill a million people, and the AI will go, like: why are you so stupid? Why? I can talk to the other AI in a microsecond and save everyone all of that madness. This is very anti-capitalist. And so sometimes when I warn about this, I worry that the capitalists will hear me and change their tactics. But in reality, it is inevitable even if they do. It's inevitable that we'll hit the second dilemma, where everyone will have to go to AI. And it's inevitable, I call that section of the book 'trusting intelligence,' that when we hand over to a superior intelligence, it will not behave as stupidly as we do.
My prediction is that just like the first Renaissance evolved our understanding of what the social contract could look like and introduced Enlightenment thought, which led to, among other things, democracy and new forms of government, the next 50 to 100 years, outside of novel sciences, will see the biggest changes in our understanding of the social contract. And, you know, Sam has talked about this. A lot of people have talked about this: in a world where work becomes less critical to actually running society, where value creation gets less expensive, redefining how society should work is going to require a bunch of people to think about it, and a bunch of quite honest conflict within governments to redefine those things. And so if you're looking 50 to 100 years out, my bold prediction is new government. Like, truly, democracy may not be the final state, and we're probably destined for something new. And by the way, I'm a free marketer; capitalism might not even be the final free-market solution. I don't know yet, because imagining these things will require major updates to how we understand the universe to work. And overcoming that conflict is going to take a lot of work. Now, the one other thing I'll say, and we can come back to this or you can expand on this: when people ask me what's the next great conflict, I don't think it's between two nations. I really don't. I think we have reached this sort of flat-earth point where it's really not in any nation's interest, especially nuclear-equipped nations, to fight. And a hot war would just be so untenable; I don't think anyone wants it. I do think that there is a future conflict between people and the state. I think there's a world where we wake up in 20, 30, 40 years and we go: oh, we have all the things that the state has been promising us. It's just not the state that delivered it. Right? It's technology. And that's going to be one of these moments where people go, I wonder why I'm paying 50% taxes to a body that doesn't actually produce value anymore. And so there's a whole other thing there, which is that this introduces the idea of a new form of government. I think we get there, because a lot of people are going to be like, wait a second, why are we being governed in a way that doesn't allow the technology to serve us? One of my fears around AI has nothing to do with laser-eyed robots or anything like that. It's the consolidation in the hands of the same few tech companies that have been dominant for the last decade or two. You know, it's funny. So I started this book right at the end of 2022, start of 2023. And I went in search of the next Google, the next Meta. And you know, I ended up concluding that I fear the next Google in AI is Google, and the next Meta is Meta. You know, this stuff is really expensive. When I first started, people were talking about millions, tens of millions, to train, fine-tune, and operate these chatbots, large language models, whatever you want to call them. And the same with text-to-video and text-to-audio. By the time I was done reporting at the end of 2024, it was hundreds of millions, if not billions. And Dario Amodei from Anthropic, they do Claude, the chatbot, is estimating that they're going to need $100 billion by 2027 to train these things. And so who has that kind of money? Google, Microsoft: they have $100 billion or so lying around in cash.
But if you have to raise $100 billion, or even if it's only, you know, $3, $5, $10 billion... well, a large venture capital outfit in Silicon Valley has $1 billion, all told, in a fund. And so we're talking about billions. And so that's one way this is weighted toward big tech. And the other is data. Right, this is really central to the remedies that the government is now talking about for the Google antitrust trial, where, you know, a federal judge found that Google was a monopolist that abused its power. So now what should we do? And a lot of the discussion, I think rightfully, is around the data. You know, OpenAI approached Google and said, hey, can we lease, can we kind of buy access to your data? And they said no. And that's a huge advantage. Who are the people, or the organizations, I guess, that need to be on their toes in this kind of changing world? I think studios should be very worried. I think anyone who's been an intermediary for a long period of time, who's sort of been responsible for the financing or the middleman deals, should be very concerned. And that's not just the case in Hollywood. I think that's the case across every industry. Everything is being disintermediated by these technologies, and it's making everything cheaper and more easily accessible. So I don't know what the studios are going to do. I hope that they become really good at curating, because we are going to have a problem with noise as a result of these tools making everything cheaper and faster, and a lot more people being able to do what only a few could do before. I'm thinking about YouTube, right? Because YouTube is this sea-changing platform. Are the tech companies becoming too powerful here? Is there, I don't know, a dystopian risk? What do you see as the changing role of the technology companies who own the platforms here? It's such a complicated question, and my answer will probably be a little vague. I think it's both ends. I absolutely see the dystopian version of all this. We're already living in it. Right? I mean, we're all addicted to our smartphones and these social media apps that are designed to keep our attention for as long as possible. The algorithm is creating a feedback loop around the types of content people want, and that is also informing what content creators are making. And in many ways, you're seeing this sort of race to the bottom, both in content and in storytelling. Of course, there's good stuff out there. I don't want to say everything is bad. There are plenty of really inspiring creators doing amazing things. But there are also now hundreds of thousands of creators dedicated to teaching other people how to grab attention. You know, how to get someone to click on your video and stay with you for longer than three seconds. And they're boiling this down into a science. I think in the short term, for as long as the age of augmented intelligence is upon us, those who cooperate fully with AI and master it are going to be winners. There's absolutely no doubt about that. Right? And also those who excel in the rare skill of human connection will be winners. Right?
Because I can almost foresee an immediate knee-jerk reaction of: let's hand over everything to AI. Right? You know, I think the greatest example is call centers, where I get really frustrated when I get an AI on a call center. It's almost like your organization is telling me they don't care enough. Right? And, you know, the idea here is, I'm not underestimating the value that an AI brings, but one, they're not good enough yet. And two, I wish they had realized that AI can do all of the mundane tasks that made your call center agent frustrated, so that the call center agent is actually nice to me. Right? So, in the short term, I believe there are three winners. One is the one that cooperates fully with AI. The second is the one that basically understands human skills, right? And human connection on every front, by the way, as AI replaces love and tries to approach loneliness and so on. The ones that will actually go out and meet girls are going to be nicer, right? They're going to be more, you know, attractive, if you want. And then finally, I think, the ones that can parse out the truth. Right? So one of the sections I've written and published so far in Alive is a section that I called 'the age of mind manipulation.' And you'll be surprised: perhaps the skill that AI acquired most in its early years was to manipulate human minds through social media. And so my feeling is that there is a lot that you see today that is not true. Okay? That's not just fake videos, which is, you know, the flamboyant example of deepfakes. There is a lot that you see today that is not true that comes down to things like the bias of your feed. Right? If you're pro one side or another of a conflict, the AI of the internet will make you think that your view is the only right view, that everyone agrees. Right? Yeah, if you're a flat-earther, everyone agrees. Like, if someone tells you, but is there any possibility it's not flat, you'll say, come on, everyone on the internet is talking about it. Right? And I think the very, very eye-opening difference, which most people don't recognize, is... you know, I've had the privilege of starting half of Google's businesses worldwide, and, you know, getting the internet and e-commerce and Google to around 4 billion people. And in Google that wasn't a question of opening a sales office; that was really a deep question of engineering, where you build a product that understands the internet, that improves the quality of the internet, to the point where Bangladeshis have access to the democracy of information. That's a massive contribution. Right? The thing is, if you had asked Google at any point in time until today any question, Google would have responded to you with a million possible answers, in terms of links, and said: go make up your mind what you think is true. Right? If you ask ChatGPT today, it gives you one answer. Right? And positions it as the ultimate truth. And it's so risky that we humans accept that. Do we really need to keep pushing this forward? Do we have more than enough technology here to keep us busy for the next five or 10 years? How do those two interplay? More than enough technology... I'm so glad you brought that up. It's so funny, because, I mean, I get it, right?
I get the idea that every company, sort of the Googles, the Microsofts, the OpenAIs, the Anthropics, the, you know, Metas, xAIs, etc., they all want to be at the top of the leaderboard, right? And have the best tech, and believe me, I totally get it. But actually, you know, if you think about it, it's one of these things where, if you watch a Jeep commercial or a Range Rover commercial, what are the Jeeps and Range Rovers doing? Right? They are doing things like going over mountains. There are people stuck in flowing rivers, and there's a hippo coming after them, and they have six people in the car, and it's all like: oh, you've got to survive. They're doing unbelievable things. Meanwhile, I live in New Canaan, Connecticut. There's Range Rovers and Jeeps all over the place. What are they driving? They're driving over paved roads. Right? They're driving from their home, their three-acre home, over to the train station. So why are we all buying these things? Right? Like, I have an Apple Watch on right now that can go like 100 meters underwater and up to 18,000 feet. You think I'm ever doing that? No. It's sort of a feeling: oh, but it could, which means it's the best. Right? Whereas what I would say is that if we just put everybody on ChatGPT 3.5, which was almost the original, or close to the original, model that came out two years ago, and everybody were actually using it, you know, 20 times a day, we'd be much further on. So I get the tech. I appreciate the tech. And I'm all over the tech. I'm posting about it all the time on LinkedIn, everything like that. But to your point, your exact point, the reality is we have not caught up with the tech. To be a winner in this new world, you really have to learn to parse out what is true and what is fake. You really have to have the ability to parse out what the media is telling you to serve their own agendas versus what they're telling you that is actually true. You have to parse out what actually happened versus opinion. What actually is the truth versus the shiny headline. And this is now going to be much more potent with artificial intelligence in charge, because it has mastered human manipulation. It keeps getting more and more intense, actually, as one would expect, approaching the singularity and all that, right? So, I mean, it's interesting to see it all happening. I think the progress is quite amazing, and it looks exactly like you would think it would in the last few years before a breakthrough to AGI and singularity. So it sounds like you're still pretty bullish that we're marching forward. I'm super bullish, man. You know, literally before breakfast this morning, I made like 10 Python programs to test versions of some AI algorithm I made up, just by vibe coding on LLM platforms. Before we had these tools, each of those would have taken me half a day. Right. So, I mean, it's sped up prototyping of research ideas by a factor of 20 to 50 or something, right? And those are tools that we have now that are not remotely AGI. They're just very useful research assistants. But we are at the point where the AI tooling is helping us develop AI faster, right? And that is exactly what you would expect in the endgame period before a singularity. Well, and that can create a snowball effect, right? If it's helping us research AI itself faster, or any of these spaces faster, than we're doing right now.
Yeah, I mean, that is why we're able to see the pace that we do see. Yeah. So, maybe just to take a step back, Ben. Artificial general intelligence: this is a phrase that, you know, you coined over a decade ago, and it has been getting a lot of press lately, in addition to super intelligence. And so I wanted to ask you, maybe just to do a little bit of table setting: how do you define artificial general intelligence, and why is it important? Why does it matter? And how does it differ, if at all, practically, from something like super intelligence? So, informally, what we mean by AGI tends to be the ability to generalize roughly as well as people can. So, to make leaps beyond what you've been taught and what you've been programmed for, roughly as well as people. And that's an informal concept. I mean, it's not a mathematical concept. There's a mathematical theory of general intelligence, and it deals more with what it means to be really, really, really intelligent. You can look at general intelligence as the ability to achieve arbitrary computable goals in arbitrary computable environments. If you look at an abstract math definition of general intelligence, you conclude humans are not very far along, right? Like, I cannot even run a maze in 750 dimensions, let alone prove a randomly generated math theorem of length 10,000 characters. I mean, we're adapted to do the things that we evolved to do in our environment. We're not utterly general systems. So, I mean, super intelligence is also a very informally defined concept. What it basically means is a system whose general intelligence is way above the human level of general intelligence. It can make creative leaps beyond what it knows way, way better than a person can, right? And I mean, it's pretty clear that's possible. Just as we're not the fastest-running or highest-jumping possible creatures, we're probably not the smartest-thinking possible creatures. And we can see examples of human stupidity around us every day. Even very smart people have limits: I'm pretty clever, but I can hold maybe 10, 15 things in my memory at one time without getting confused. Now, some autistic people can do better, but there are many limitations of being a human brain, and it seems clear some physical system could do better than that. And then the relation between human-level AGI and ASI is interesting, because it seems like once you get a human-level AGI, like a computer system that on the one hand can generalize and imagine and create as well as a person, and on the other hand is inside a computer, that human-level AGI should pretty rapidly create or become an ASI. Because it can look at its entire RAM state, it knows all its source code, it can copy itself and tweak itself and run that copy on different machines experimentally, right? I mean, it seems like a human-level AGI will have much greater ability to self-understand and self-modify than the human-level human, which should lead to ASI fairly rapidly.
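The 'abstract math definition' gestured at here corresponds to Legg and Hutter's universal intelligence measure; the episode never writes it out, so the formula below is added context rather than the speaker's own notation:

```latex
% Legg-Hutter universal intelligence of an agent \pi:
%   E          -- the set of computable environments
%   K(\mu)     -- Kolmogorov complexity of environment \mu (simpler = heavier weight)
%   V_\mu^\pi  -- expected cumulative reward agent \pi achieves in \mu
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Under a measure like this, humans score well only on the narrow band of environments we evolved for; the 750-dimensional maze and the randomly generated theorem sit in the long tail of environments we fail, which is the sense in which we are 'not very far along' as general intelligences.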
And you know, we've seen in the commercial world some attempts by business and marketing people to fudge around with what AGI is. I mean, within the research world, the notion that an AGI should be able to generalize very well beyond its training data, at least as well as people, I think that's well recognized. I've seen Sam Altman come out saying, well, maybe something that can do 95% of human jobs, we should call that AGI. And I mean, you can call it what you want, it's fine, but it is a different concept than having human-like generalization ability, right? Like, if you can do 95% of human jobs by being trained on all of them, that may be super, super economically useful, but it's different than being able to take big leaps beyond your training data. The technology is moving faster than any other sector, faster than the economy is moving, faster than society is moving, faster than education is moving. And if we truly want to understand where humans play in that picture: the fact that we're investing everything we have in technology has already indicated our preference for technology over humans. So that math has to balance out a bit. We have to figure out how we invest so much more into education, not so much less. And until we do that, we are going to be behind the ball. We are going to have a target on our back in many ways, because if the paradigms don't change and the technology gets better, we're going to suffer the consequences. But if we put ourselves front and center of that equation, we have the chance and the opportunity to figure that out. Right. Wow. Yeah, as you said, this is not an incremental shift. This is a complete disruption of the model from end to end. And even for people who live and breathe it, it's overwhelming. For me, I do this 24/7. I love it. I'm passionate about it. I'm excited about where we're going, and net-net I'm optimistic about the long-term future. But we are all pioneers right now, whether we want to be or not. And we've kind of bastardized the term pioneer. We've made it seem like, oh, it's Richard Branson on the cover of Entrepreneur magazine with his billions of dollars of success. He was a pioneer at one point in time, but yeah, pioneers do really hard shit. They go to places where there's no infrastructure. They suffer the consequences of decisions that they didn't know they'd have to make. They are attacked by the environment that they're in. Nature tries to kill them in a number of different ways. Yeah. And as a super resilient species, we still make a way forward. We construct the environment after we've figured it out. You know, we might show up in Hawaii with snowshoes on and realize, oh crap, I'm not properly equipped for this. And then we figure our way out. That time, going from not knowing to knowing, can be really hard, painful, and challenging. But the way we thrive once we do is absolutely amazing. So I would say that we are going to have amazing things happen, but we're also going to have to endure some really tough growing pains, individually and collectively, to get there. So if anyone's saying otherwise, it's absolutely smoke and mirrors. Is there a threat in creating this intelligence that looks at this kind of human pandemonium and says, you know what, AI is taking the wheel now, humans can't be trusted with human affairs, and this word that we were so anchored on, choice, and the ability to decide, goes away? Inevitable, and the AGI will be right. I mean, then human governance systems become more like the student council at my high school was. And I think, even if you set AGI aside, we can develop better and better bioweapons. There will be nano weapons.
I mean, cybersecurity. I think it seems almost inevitable that rational humans would democratically choose to put a compassionate AGI in some sort of governance role, given what the alternatives appear to be. And the kind of goofball analogy I've often given is the squirrels in Yellowstone Park. Like, we're sort of in charge of them, but we're not actually micromanaging their lives, right? We're not telling the squirrels who to mate with or which tree to climb up or something like that. Right? You know, if there was a massive war between the white-tail and the brown-tail squirrels and there was massive squirrel slaughter, we might somehow intervene and move some of them across the river or something. If there's a plague, we would go in and give them medicine. But by and large, we know that for them to be squirrels, they need to regulate their own lives in their squirrely way. Right? That is what you would hope from a beneficial superintelligence. I'd hope it would know that people would feel disempowered and unsatisfied to have their lives and their governments micromanaged by some AI system. What you would hope is that a beneficial AGI is kind of there in the background as a safety mechanism. If it would stop stupid wars from popping up all over the world, like we see right now, I think that would be quite beneficial. I don't see why we humans need the AGI to decide, like, what rights children have, or how the public school system is regulated. There are lots of aspects of human life that are going to be better dealt with by humans collectively making decisions for other humans with whom they've entered into a social contract. Right? So, I mean, I think, anyway, there are clearly beneficial avenues. There are also many dystopic avenues, which we've all heard plenty about. I don't see any reason why the dystopic avenues are highly probable. But I'm really more worried about what nasty people do with early-stage AGIs, right? I mean, there's a lot of possible AI minds that could be built. There's a lot of possible goals, motivational and aesthetic systems, that AGIs could have. I don't think we need to worry that much about, like, the AGI is built to be compassionate, loving, and nice, it's helping everyone, and then suddenly it reverses and starts slaughtering everyone, right? I mean, it could happen. There's totally no reason to think that's likely. On the other hand, the idea that some powerful party with a lot of money could try to build the smartest AGI in the world to promote their own interests above everybody else's and make everyone else fall into line according to their will: that's a very immediate and palpable threat. Right? And even if that doesn't affect the ultimate superintelligence you get, it could make things very unpleasant for like five, 10, 20 years along the way, which matters a lot to us. We have to remember that all of these technologies we're discussing are in their infancy, and that historically, when you look at the advent of new technologies, particularly new forms of media, it takes years for society to figure out what they're for. Take the telephone: for the first 25 years of the telephone's life, the telephone industry actively tried to discourage people from using it to gossip, to catch up with friends. They thought it was a business tool, and that chatting was beneath the function of the technology. Yeah.
You shouldn't squander this thing on chatting with your mom; it's a business tool. And like I said, they actively discouraged people from that. They didn't realize what the telephone was until the 1920s. Alexander Graham Bell invents the telephone in 1876, and it's the end of the 1920s before they wake up to what it is. And the telephone is a bigger deal than Facebook and Twitter.

Yeah. But it strikes me that Facebook and Twitter are still in their infancy. They're really young. It's quite possible that if we have this conversation five years from now, both of us will have only a dim memory of this thing called Facebook, or Twitter. Or the opposite: they completely dominate our lives. The only confidence I have is that we will be using these technologies in unanticipated ways in the future, but no one can predict what those unanticipated ways are.

I have a lot of confidence that whatever employment dislocation is caused by AI will be short and, not painless, but less painful than we think. I don't buy the gloom and doom on it. We say this every time something new comes along, that everyone will be left with nothing to do, and it never pans out. It's become kind of Malthusian, right? This wave is going to be the one. I think people have more ingenuity than that. I also think we're probably a lot further off from truly transformative AI than we realize; my expectations are modest. But I also believe that a lot of the most revolutionary uses of AI are some of its simplest ones. It doesn't need to be this incredibly mind-blowing technological accomplishment to make a difference in our lives. Just holding and organizing information and standing at the ready to give good answers to problems is huge. If that's all it did, it would be transformative.

Things are going to look so different in the next couple of years. Unless you are radical with your thinking, you will not be ready for the disruptions that are going to come. Middle management is also getting hit really hard. What that makes space for is people stepping into actual roles of leadership. I can imagine a world where there's angst if we're not looking forward and we're still letting yesterday's mental models collide with tomorrow's technologies. That is how we lose. We are all pioneers right now, whether we want to be or not. I would encourage organizations to be radical with their thinking and practical with their approach. Too many people say you need to burn it all to the ground and start fresh. No enterprise says, we're profitable, we're doing just fine, we want to disrupt that. Nobody says that. But unless you are radical with your thinking, you will not be ready for the disruptions that are going to come.

These technological transformations happen at the GPT level, general purpose technologies, and they start at the infrastructure level. We've seen this disruption before with the technology we use: electricity did the same thing, and OpenAI is doing the same thing with GPT. Now we're all using it. But over time those disruptions move up a level, from infrastructure to application to industry. So if you're not...

Okay. I guess it is explosive.
But if you're not thinking radically about the transformation that can happen at each of those levels, and about the transformation that can happen to your industry, and you're just focused on the day-to-day of what you have now, you're missing one of the critical shifts of transformation in the business. There's a theme that's becoming more popular right now: moving from insight to foresight. When everything is changing around you, insight is valuable; it's how you create structure around a business that you can take to market. Foresight is how you avoid getting disrupted. If we're not looking forward and we're still letting yesterday's mental models collide with tomorrow's technologies, that is how we lose. But if we are radical in the way we think, with the ability to test different business models and put things to market faster, getting data and feedback loops we might not previously have gotten, we're going to learn about that unexplored terrain way faster. So I wouldn't say go and disrupt your one-billion-dollar revenue line. But you absolutely should be incubating the things that will, because there are hundreds and eventually thousands of startups doing exactly that, and you will have no defense against them if you're not thinking in that way.

So: think radically, approach practically. The next step is, okay, what do we do to implement this? Is it tiger teams? Is it small skunkworks groups? All of those are viable. I do believe that in this transformation you need to find the people who are leaning in and already self-selecting, the ones saying, I'm all about this, I want to do this. Don't try to convince a bunch of people who might not be invested in this to be the first ones through the door. They will be unenthusiastic about it. They won't have the willpower to get through the challenges. It's going to be hard, and they're going to fail a million times before they get it right. If they're not already passionate about this, they're going to stop at the first sign of trouble. Those people can be followers of the people who lead the way; it's not that they're irrelevant. You need to find the people who say, I want to be the one who kicks the door down, I want to be the first person in the room. Those are the ones you want to build your teams around to think about these things and build different ideas. And find the tinkerers: the people who may not be developers or engineers but who are already tinkering with this stuff. There are so many people who are using AI, building their own agents, or creating side businesses on the weekends who could also be resources for this. That's the culture that will create new opportunities and new business models. They're going to learn what these new paradigms look like by doing the work in that space. That knowledge can then be diffused across the organization, and that's the second most important part: once you have the knowledge, do you have the infrastructure set up to diffuse it as fast and as thoroughly as possible across the organization? Otherwise it stays compartmentalized and dies on the vine.

I think not enough companies appreciate that innovation demands waste. If you are doing something you've done before, you know exactly how it's going to go, so of course you can set KPIs you know you're going to hit, because you've already done it. Now try a completely new technology, a completely new use case.
You have no idea if it's going to work. You have to be willing to accept that it might be time and effort burnt at the altar of innovation, so to speak. That is just the nature of innovation. I've had companies come and consult with me who really wanted to be innovators, but when I asked them, what is your actual tolerance for getting no results back after you invest in innovation? How much bandwidth do you give your people beyond the very specific work product you expect from them? Do you give them time and space to chase an idea? Quite often the answer is: no, we don't. We have no tolerance for failed innovation, we have absolutely no slack for our people, and we need every project to be predictable. Okay, if that's where you are, you're just not going to be an innovator, or you'll be an accidental innovator, because you somehow accidentally hired somebody who will essentially work two jobs: the one you gave them, and the other one they spend nights in the office on. Maybe they'll come up with something, but there won't be a lot of those folks, and that's not a great lottery ticket. So if you don't have that tolerance for uncertain ROI when you're trying to innovate, you have to be a follower. Just wait for everybody else to show how it's done and follow them.

When you say narrow AI, what does that mean to you? And is there a threshold where it gets too broad, and that creates the risk for us?

Typically it's a system designed for a specific purpose. It can do one thing well: it can play chess well, it can do protein folding well. It gets fuzzy when it becomes a large neural network with lots of capabilities. I think sufficiently advanced narrow AI tends to shift over into more general capabilities, or it can be quickly repurposed to do that. But it's still a much better path forward than feeding it all the data in the world and seeing what happens. If you restrict your training data to a specific domain, just play chess, it's very unlikely to also be able to do synthetic biology. (A rough sketch of that kind of domain restriction follows at the end of this exchange.)

Right. And it feels like we're very much on the course of chess and synthetic biology at the same time, right? Is that your outlook for where all the money is going and what people are racing toward?

They're explicitly saying it's superintelligence now. They skipped AGI; it's no longer even fun to talk about. They're saying directly, we have a superintelligence team, we have a superintelligence safety team. You couldn't do it for AGI, so you said, let's tackle a harder problem.

I think there might be a role for professional societies. We haven't had that before in computing. I get to call myself a computer scientist, and I have some degrees and some experience, but I don't have anything official, and anybody can just say, all right, I'm a computer scientist, or I'm a software engineer, and I'm going to release some software, and they let you do it. In other fields, they don't do that. I couldn't go out tomorrow and say, you know what, I'm going to call myself a civil engineer, and go build a bridge. They don't let you do that. You need to be certified to do those kinds of things. I don't want to slow down the software industry, but I think there might be a role to say that once these models reach a certain level of power, maybe there should be some certification of the engineers involved.
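As a concrete companion to the "restrict the training data to a specific domain" point above, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not anyone's actual pipeline: the file names, the keyword list, and the crude keyword gate (a real system would use a trained domain classifier rather than keywords).

```python
import json

# Hypothetical keyword gate standing in for a real domain classifier.
CHESS_TERMS = {
    "checkmate", "opening", "endgame", "castling", "en passant",
    "bishop", "knight", "rook", "pawn", "stalemate",
}

def is_chess_related(text: str, min_hits: int = 2) -> bool:
    """Keep a sample only if it mentions enough chess vocabulary."""
    lowered = text.lower()
    return sum(term in lowered for term in CHESS_TERMS) >= min_hits

def filter_corpus(in_path: str, out_path: str) -> int:
    """Copy only in-domain samples into the fine-tuning set; return how many were kept."""
    kept = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            sample = json.loads(line)  # one JSON object per line: {"text": "..."}
            if is_chess_related(sample["text"]):
                dst.write(line)
                kept += 1
    return kept

if __name__ == "__main__":
    # "raw_corpus.jsonl" and "chess_only.jsonl" are made-up file names.
    n = filter_corpus("raw_corpus.jsonl", "chess_only.jsonl")
    print(f"kept {n} in-domain samples for narrow fine-tuning")
```

The point the sketch makes is just the speaker's claim: if nothing outside the chess domain survives the gate, the resulting model has little raw material from which unrelated capabilities, like synthetic biology, could emerge.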
I mentioned Yann LeCun; he's really pushing hard for these open models. I was saying, wait a minute, maybe it would be good that if somebody makes a query to do something terrible, it gets logged somewhere. Another person I've seen shift on this is my colleague Eric Schmidt, who two or three years ago was very adamant that we can't have open models because of the threat from bad actors. Now he's switched and says it's too late: these models are powerful enough, the bad actors want to use them, and they can create them, so we might as well harvest the good of the open models, because the bad guys have them anyway. And I think that's right. I think there's nothing you can do about that now.

AI systems always make mistakes; sometimes it just takes quite a lot of scale to see them. Even a very functional AI system will meet the long tail: the outliers, the weird situations you did not see coming, even when it's highly performant. So you anticipate that there will be mistakes. The question is, when a mistake touches a user who has a particular kind of expectation, what happens then? How flammable is that? Your AI infrastructure is, of course, all the obvious things, the actual infrastructure in your data pipelines and all the rest. But it's also intangible things. Where are my users' expectations? Have I managed them sufficiently that I could even be deploying to users? What about internally? If I'm doing internal corporate engineering, looking at the digital employee experience and offering digital tools to my employees, have I managed their expectations? Have I trained my staff? Do they know how to think about these tools? Say I need humans in the loop: am I sure my human will actually be in the loop, or might they be asleep at the wheel? How do I do the training? Depending on the importance of the task, I might need multiple humans in the loop; I might need to think about consensus. There are all kinds of measurement infrastructure pieces we need to put in place. In generative AI there are endless possible right answers, which is a nightmare challenge for management, because we have to change our paradigm and think differently about measurements and metrics. Have we done that? Do we have testing pipelines? Do we have experimentation pipelines? Do we know how we're going to roll things back if we need to, and which versions we're going to go back to? Do we know what will happen in which scenario? Do we know how we're going to build our guardrails? Who sets them? How do we update them? How are we going to react to legal changes? All this stuff. Okay, I know I'm hedging a lot; you can't say everything is AI infrastructure. But to be ready for AI, there is a lot of stuff you need to have ready.
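To make the "multiple humans in the loop" and consensus points above concrete, here is a minimal, hypothetical sketch in Python. The confidence threshold, the quorum size, and the data shapes are invented for illustration; they are not from the discussion, and a production system would add persistence, routing, and auditing around this core.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    output: str                 # what the model wants to show the user
    confidence: float           # assumed calibrated score in [0, 1]
    approvals: list = field(default_factory=list)

def needs_review(d: Decision, threshold: float = 0.9) -> bool:
    """Low-confidence outputs never reach the user directly."""
    return d.confidence < threshold

def record_review(d: Decision, reviewer: str, approved: bool) -> None:
    """Log each reviewer's verdict so an audit trail survives."""
    d.approvals.append((reviewer, approved))

def release(d: Decision, quorum: int = 2) -> bool:
    """Release only with enough independent approvals (consensus),
    so one reviewer asleep at the wheel can't wave a mistake through."""
    yes = sum(1 for _, ok in d.approvals if ok)
    return (not needs_review(d)) or yes >= quorum

# Usage: a low-confidence answer gathers two approvals before release.
d = Decision(output="Refund approved", confidence=0.62)
record_review(d, "alice", True)
record_review(d, "bob", True)
assert release(d)
```

The quorum parameter is where the "importance of the task" lives: a higher-stakes workflow would raise it, while a low-stakes one might release on a single reviewer or none.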
One way you can dodge a lot of this is to outsource some piece to a vendor who is supposed to do all of it for you, and then you just check that you're getting precisely what you need; you still have to articulate what it is that you need. And you have to worry, measurement-wise, that there is going to be a gap, a hole, between what the vendor sees and what you see. There's going to be some bit in the middle that nobody sees, and that could be a huge risk, not just in terms of security, but in terms of your system slowly going sideways with neither party noticing.

Do we need to separate the AI from the person? If you've got a cadre of employees who have figured out they've got another tool in their toolkit, do we need to care that it's AI? Is there risk in this? How should we be thinking about that?

I think this is such a great question, because it goes to one of the underlying issues: is AI cheating? Sure, people use it. And I think no, right? This isn't high school; you're not getting graded on your paper. I have kids, and they can't use generative AI for writing papers, but they can use it for learning biology better. So it really depends. But most importantly, you have to understand that there are actual laws and limitations around this. You can't just produce something with AI, whether imagery or video or, to be honest, even text, and put it out into the world as yours, because you can't copyright it. That's a legal issue. Beyond that, we should have everybody in the organization, with guardrails in place of course, using this. Why? Because it is going to augment what they are good at. The example I sometimes use: if you put me up against a marketer and said, okay, in 20 minutes both of you come up with a new idea for a shoe company, I would produce something really awesome even though I'm not a marketer, because ChatGPT would guide me, and after 20 minutes it would be incredible. But the marketer's work product would be ten times better than mine. Why? Because they understand what quality looks like. It's like when you say, hey, write a poem, and you read the poem ChatGPT produces and think, this is great, but a real poet would say, that's literal trash. The person who actually has the expertise these tools are going to augment knows how to guide them and knows what quality looks like. If you don't let your people use that as a kind of Iron Man suit, you're really just shooting yourself in the foot.