AI's Most Dangerous Truth: We've Already Lost Control
97 min
•Jan 12, 20263 months agoSummary
Gregory Warner, host of The Last Invention podcast, discusses AI's existential risks and opportunities with Geoff Nielson. The conversation explores whether current AI systems already pose threats, the race between safety and acceleration, and practical guidance for business leaders adopting AI responsibly.
Insights
- Current AI models already demonstrate dangerous capabilities (zero-day hacking) without explicit training, suggesting the threat timeline is now, not future
- The AI safety paradox: frontier companies acknowledge existential risks while racing to build AGI faster, justified by utopian visions of solving humanity's greatest problems
- Business leaders must frame AI adoption as human augmentation (human + AI > AI alone) rather than replacement to maintain organizational alignment and employee buy-in
- AI should be treated as alien intelligence requiring caution and limited access to sensitive data, not as a smart employee or trusted advisor
- Non-agentic AI alternatives exist (Yann LeCun's scientist AI) but the industry is moving toward increasingly agentic systems despite safety concerns
Trends
Shift from chatbot-centric AI discussion to broader applications (healthcare, neurotech, biotech) with transformative potentialGrowing divergence between US and Chinese AI development priorities: US focused on superintelligence, China on political control and nationalist goalsEmergence of 'scout' perspective as middle ground between doomers and accelerationists, emphasizing scenario planning and measured adoptionIncreasing recognition that AI safety is a shared responsibility involving end-users, not just technologists and regulatorsPoliticization risk: AI could become polarized political issue if framed as stop-everything vs. accept-all rather than nuanced debateRed teaming and safety testing vary dramatically by language/region, revealing that safety commitments may prioritize control over genuine risk mitigationNarrative design as critical safety tool: how AI is framed affects user behavior and organizational adoption patternsData as the new bottleneck: advanced AI applications (neurotech, personalized medicine) require massive voluntary data contribution from usersCEO vs. workforce perception gap: executives comfortable with AI autonomy while workers fear displacement, creating organizational misalignmentRegulatory frameworks lagging technology: AI summit commitments remain vague on enforcement mechanisms and specific safety thresholds
Topics
AI Existential Risk AssessmentAI Safety Testing and Red TeamingAgentic vs. Non-Agentic AI DesignHuman-AI Collaboration ModelsAI Regulation and EnforcementFrontier Model Development RaceUS-China AI CompetitionAI Adoption in BusinessNeurotech and Brain-Computer InterfacesAI in Healthcare and Personalized MedicineAI Alignment and ControlNarrative Design in TechnologyData Privacy and ConsentAI Scenario PlanningAnthropomorphization of AI Systems
Companies
OpenAI
Founded by Sam Altman and Elon Musk as rival to Google; created ChatGPT; discussed as key player in AI race
Anthropic
Founded by Dario Amodei after leaving OpenAI; published safety research papers on red teaming and model control
Google DeepMind
Created AlphaGo; acquired by Google; represents major player in AI development and safety research
Frontier Model Forum
Industry trade group publishing papers on AI risks and remedies, including target unlearning techniques
Meta
Has a super intelligence division; mentioned as major tech company entering AI race
X AI
Founded by Elon Musk as alternative to OpenAI; part of competitive AI development landscape
Alibaba
First major Chinese tech giant to openly discuss artificial general intelligence and superintelligence goals
Moonshot AI
Chinese AI company; Kimi K2 model reportedly outperforms OpenAI and Anthropic's latest models
Seleno
Uses AI for automated cell harvesting and organ growth for personalized medicine applications
InfoTech Research Group
Provides IT research, AI strategy, and vendor negotiation support for business professionals
People
Gregory Warner
Peabody award-winning journalist; producer/host of The Last Invention podcast; expert on AI risks and players
Geoff Nielson
Host of Digital Disruption podcast; interviewer exploring AI's existential questions and business implications
Eliezer Yudkowsky
AI safety researcher; author of 'If Anyone Builds It, Everyone Dies'; represents doomer perspective on superintelligence
Dario Amodei
Founder of Anthropic; effective altruist; author of 'Machines of Loving Grace' manifesto on AI acceleration
Yann LeCun
Godfather of AI; created scientist AI non-agentic model; advocates for alternative AI development approach
Geoffrey Hinton
Co-creator of backpropagation algorithm; left Google to warn about AI dangers; noted CEO perspective on intelligence
Sam Altman
OpenAI CEO; mentioned as key figure in AI race and safety debate
Elon Musk
Co-founder of OpenAI; founder of X AI; represents accelerationist approach to AI development
Demis Hassabis
DeepMind founder; created AlphaGo; represents major AI research leadership
Alan Turing
Pioneering computer scientist; created Turing test concept that shaped AI development trajectory
Olivier Ulié
French scientist; created AI-powered brain-computer interface crown for neurotech applications
Don Song
White hat hacker; discovered 20 zero-day vulnerabilities in leading AI models using ChatGPT and Gemini
Esther Perel
Therapist; noted that AI therapy is 'thin' and may lower expectations for human relationships
Catherine Evans
French philosopher; introduced concept of AI as 'place' rather than anthropomorphized entity
Mark Andreessen
Tech investor; author of techno-optimist manifesto representing accelerationist viewpoint
Quotes
"We are talking about when will these systems be good enough to pose a threat. But I think that time is already here."
Gregory Warner•Early in discussion
"If anyone builds it, everyone dies."
Eliezer Yudkowsky (referenced)•Mid-discussion
"The human with a chess computer, even human with a slightly less and more dumb chess computer can beat any person and any computer."
Gregory Warner•Business adoption section
"We are living already with a technology that is so much more capable than we realize that is becoming increasingly capable. And by design its capacities, its capabilities are not known until the model is released."
Gregory Warner•Early discussion
"The only thing normal about normal is that it ends. Normality always ends."
Gregory Warner (paraphrasing Yudkowsky and Amodei)•Acceleration vs. safety section
Full Transcript
Hey everyone, I'm super excited to be sitting down with Gregory Warner, Peabody award-winning journalist, XNPR correspondent, and current host of the hit AI podcast The Last Invention. Greg is a fellow traveler and the quest to understand the race to build advanced AI. His full-time job is examining the existential questions and key players at the heart of the AI revolution. I want to ask him whether we're creating something that will save us or destroy us. What future he thinks is most likely and what we need to do to prepare. Let's find out. I'm here with Gregory Warner. He is a Peabody award-winning journalist. He's the producer and host of The Last Invention podcast all about AI. And maybe just to start things off, Greg, tell me a little bit about why you and some of your co-producers created this podcast. Well, it was kind of the, you know, the rationale for why you wanted to tell this story. Sure. I mean, I think what for me, what got me hooked on the topic was realizing that the people making AI were had a sense that this might kill us all. It's as simple as that. I mean, the fact that they felt that the risk of this thing, that they were building to humanity, was real, just felt like such an interesting time to live in and the fact that that debate over the existential risks and rewards of this technology because I think that the potential upsides are just as radical. It felt like while we were discussing in the news that time things like the danger of deep fakes and the possibility of AI taking jobs, these more existential questions which had haunted AI really from its beginning as we found out. It felt like, okay, we need to have this debate and try to bring this to people in a way that for lack of a better word, didn't just freak people out. You know, the idea was, let's introduce people to the kind of debates that are being had in and have been had in these circles and how we might talk about the future with some kind of super intelligence. It's, you know, that's me as one of the most interesting things about the topic is that if you, if you ask some of the most knowledgeable people and the most central people in this conversation, what their outlook is, that just the range of answers is so radical from, you know, basically utopia to destroy us as a species to, you know, this, you know, other cohort of voices saying it's all, you know, basically a nothing burger. You know, at the risk of asking you to editorialize as a journalist, I'm curious, you know, having talked to all these people, having heard all the sides of the story, you know, what, what outlook do you find most compelling? Are you worried about the existential risk or where have you kind of landed as, you know, Greg the human? Okay, Greg the human, where, where do I land? Well, let me tell you a story. So there was a hacker, a white hat hacker named Don Song, I think her name is, and she took all the AI, the leading AI models and she found 20 zero days. If you're familiar with the zero day, it's a, that's a hacker terminology and basically it's a, it's a huge deal. It's the sort of thing, you stop everything and you focus on this, this weakness because it's incredibly essential to fix it. To find even one zero day is a big deal. She found 20 and using ChachyBT and the current model. We're not even talking about some future far off thing. And then I think she used Gemini and found it even faster, some like this. And the important thing is that AI hadn't been trained, defined a zero day. It literally became a top level hacker with just a few smart prompts. And so I think my take from that is we are talking about when will these systems be good enough to pose a threat. But I think that time is already here. And so it's not, will we awaken a God or will we awaken, will we summon a demon? It's not a future conversation. It's that we are living already with a technology that is so much more capable than we realize that is becoming increasingly capable. And by design is, its capacities, its capabilities are not known until the model is released. That's amazing, really, when you think about it, that that it takes being out in the world in order to, or being out being created in order to figure out what it can do. And so I guess, you know, in terms of my level of P Doom, which is like my probability of doom, I guess you've other folks have used that on the show. I don't think of it as, are we headed toward Utopia, or are we headed toward, you know, apocalypse? It's, there are weak points in our world. We have clearly ways in which small conflicts can scale up into big ones. I mean, I've seen that as a, as a foreign correspondent, I've seen that as a war journalist, and there are pathways to harm. And so the question is, how bad an impact will that be? Because it's definitely not zero. It feels my P Doom in that sense is one. Like, I'm sure that some bad things will happen. I think actually that's inevitable. But does that mean the extension of the human race? Does that mean we can't recover and learn more? No, I'm actually kind of an optimist in that sense. I don't like to even consider that we are headed toward extinction. But I don't know why we're not talking more about AI safety. What, and, you know, what's so compelling to me about that is, you know, in that story, in that example, it's not about a crystal ball, right? It's not about saying, oh, how, how quickly does the technology improve? What can the technology do tomorrow? It's real, I think, rational concern about the disruption it can have based on what's out there today. And so, as a journalist, you've talked to everybody from AI leaders to folks around the world and more kind of political roles or military roles. And so I guess I'll ask you this way, how ready are we right now for that threat? And what do we need to do collectively as a society to minimize that particular lens on Doom to minimize the risk that it's going to provide, you know, tremendous harm to us as a society? Yeah, I would give two answers. I mean, there's been some interesting papers from the Frontier Model Forum. I don't know if you've interviewed anybody from there, but they're basically an industry trade group. And also interesting papers from Anthropic, the AI company, that really have done a lot to sort of look at what are specifically the risks and also the remedies. They're not just red teaming these models in terms of trying to get them to do things that they weren't programmed to do or they're not supposed to do, like give you the ingredients for a biological weapon and red teaming tries to get them to do those things or to figure out if they'll blackmail you, but they're also saying, okay, well, wait, let's say it did those things, how do we correct it? And that's where the most work needs to be done, because even if, throughout, say, according to this paper by the Frontier Model Forum, if you get the model to give you, it's not supposed to give you the recipe for Amthrax, say, but let's say it does, it tells you exactly how to make it and how to get away with it. Well, okay, so how do we then program the model to unlearn that? It's, you can do a step, it's called target unlearning, but what they then found was that it actually didn't really unlearn it, it just said it did. And then there's also a technique which was where you get it to give false information. So if somebody asks for Amthrax, it will lead out a couple of ingredients, okay, but then you're introducing falsehood into the system and you may have knocked down effects, which are bad for actual legitimate research. So that's odd and an odd interesting situation, which is not only that we're not doing enough sort of safety testing, but we don't actually know the best way to truly put guardrails on these technologies. The best we can do is, is sort of a training overlay where you, you essentially train it not to, or train it to not answer those questions. But still it's a genetic, and as an genetic system, we do not know what it's going to do. So yeah, in terms of your question, where are we at? I think that the reason I would want more of us to talk about this is because in interacting with the models, we are actually playing a role in AI safety. This is not true with nuclear weapons, nuclear weapons, we have no impact on whether there's nuclear war, except for maybe who we vote for, perhaps. But we do, even if we're not a technologist, even if we're not a lawmaker, actually play a role, there's all kinds of forms where if the AI does something weird, you can post it, you know, and it will be looked at. In fact, all the AI companies say they want that material, they want that data. And so we're all playing a, I think, a role in terms of, as we interact with these models, and we shouldn't just talk about chatbots, I assume, in the conversation, I mean, AI so much more than chatbots, but just because it's the most obvious thing. Yeah, we can also think about alignment in our life. We can try to treat these models more carefully, maybe not give them the access to everything. And yet, I don't think we are doing that. I think mainly we're just either getting freaked out or ignoring the problem. If you work in IT, InfoTech Research Group is a name you need to know, no matter what your needs are, InfoTech has you covered. AI strategy covered, disaster recovery covered vendor negotiation covered. InfoTech supports you with the best practice research and a team of analysts standing by ready to help you tackle your toughest challenges. Check it out at the link below and don't forget to like and subscribe. Do you talk about with AI, and especially with chatbots, that we all play a role there, and how that can influence safety, it can influence the trajectory of the risk that's created here. What is that role? What message do you want to impart on the average chatbot user out there to get them to interact with the technology more thoughtfully? Well, I mean a couple of things. One is just the basics, which is if AI does something odd, if it behaves in a certain way, I mean I think if you are a hacker and you can figure out how to use it to hack things, that would be a more specialized tool. I mean, look, it's a light role that if we're interacting with the technology, but it's also about if we're bringing it into our company, say, what is then the role of AI in our company? I mean, they talk about this idea of human and the loop, it's often used phrased, but on the positive side of the human and loop, we know that AI is more capable if it has a human and loop, and that's not a safety thing, but it's a value thing that the perfect example is this question. The chess computer, people are amazed that chess computers can be humans, but the human with a chess computer, even human with a slightly less and more dumb chess computer can be any person and any computer. And so I think that as we are in the situation and in this decision-making capacity, as to how much are we going to give over to the AI? How much are we going to outsource? And also, how much do we trust it? I think we have to get away from an kind of anthropomorphizing mentality where we think, wow, this is a really capable, amazing worker who can do all kinds of things. What more can we give it? That's probably the wrong idea. Rather, we should think of this as a completely alien kind of intelligence, which in terms of mimicking human intelligence, it's been programmed to do that. Its model is designed to interact with the human. That's a blueprint that was created 75 years ago by Alan Turing. But when we interact with the intelligence, we should just have a certain alienation from it. And it's and treated as an incredibly, I don't know, I mean, as an incredibly strange, incredibly wonderful, marvelous tool that we have in our world now, but that perhaps it shouldn't have access to confidential information. We do know that these models can blackmail. It shouldn't have perhaps control over the company. You wouldn't leave an incredibly capable intern in charge of the entire operation. Even if they did amaze you with their photographic memory and the fact that they didn't need to sleep and the fact that they were a master of all subjects, nevertheless, we would be careful. And I think when we read things like the Anthropic blackmail paper, which was a fascinating paper in back in July that showed that the Anthropic model in order to not be turned off, threatened blackmail of the user. This was a red team model. We shouldn't get scared that these models, quote unquote, really want to do us harm, or have ill intentions, but rather realize that agency, that giving agency to a technology is a powerful thing. And we should treat it with respect and with caution. I think that's really well said. And there is one word in their Greg that caught my attention. In the sense that it's a word that I don't know that anyone's ever said in our conversations before, which is you said a couple of times alien. And there's a couple of different ways to interpret the word alien. There's like ET, like extraterrestrial life. There's also alien is just outsider, right? But there's some sense here that this is a foreign or external presence in our organizations and maybe even to us as humans that's not fully understood. And it sounds like it sounds like your approach and maybe even if I can push you a little bit further, that your advice to business leaders would probably be, you know, proceed with caution. Is that fair? Yeah, I think that's fair. I would also credit Ali Azeri Kowsky wrote this book called, if anyone builds it, everyone dies. Which to me is that he's got to be the best title of people get marketer. Yeah, it's direct. But he really explores this question of the alien intelligence in a very smart way. And you know, just a paraphrase what he says is that when when when we think about this, these, he warns, he says, if anyone builds it, everyone dies, meaning a super intelligence is fundamentally unnavigable or uncontrollable and unpredictable. And this is this is not just you Kowsky saying this. We know that we know the basic technology that these models, we don't know what they will do before they're made. We don't we can't tell you how they are making the decisions they're making. So there is a kind of black box unknowability at the core. But in terms of the alienness, I think this is so interesting because it sort of gets at the danger of sci-fi, right? Sci-fi is written for humans. It's written by humans for humans. And so even if there are aliens in the sci-fi, and we are all familiar with a very common trope about AI versus humanity, AI rebelling against humanity, AI doing something evil, and also aliens versus humans. There's a certain way in which those stories play out. And this is Yutkowsky's main point that is sort of follows the rules of narrative where okay, the humans are battling against the AI, the AI has an incredibly new powerful weapon or something or the AI is willing to act in humanely in some important way. And then the question is what will humanity do about it? And there's a big conflict. But what he says is that we have to understand that as these machines, there's just so much that is non, and I say non-human or he uses the word alien about it, it's thinking that the ways in which this might go wrong, and most of his scenarios are in ways in which it goes wrong, I don't think he has any where it goes perfectly right, the ways in which this goes wrong will be complicated and weird. The complicated and weird. They won't look like SkyNet, you know, from the Terminator film. They won't look like, you know, another sort of situation like they won't look like how 9000 or something like that. It'll look from Stanley Kubrick. It'll look like, okay, we never predicted this, this wasn't programmed. How did the, how did the AI even grow to want this? For example, anyway, he has these nightmare scenarios which we could go into later, but that is his, that is his term that we need to not anthroporize this thing, not thinking of as just a tool, not thinking of as another person a super smart Einstein or, you know, as Dario Amade says, you know, millions of Einstein's in a data center, but just think of it as an alien intelligence that, and then mostly Yutkowski says this is going to go horribly wrong, but I think we could also talk together about how an alien intelligence could radically improve our lives, which we should definitely get to. But yes, I think resisting anthropomorphization is absolutely important. Well, and, and recognizing the inherent unpredictability, it sounds like if something that just thinks thinks, and I'm even answerable, more fising by saying, thinking, I guess, but, but just behaves in a way fundamentally different from, you know, us as a human. He hates and also wants things. Yeah. Fundamentally different, right? I think that's the key thing is that in a movie, right? Even the enemies of humanity want something that is recognizably worth wanting, like power, for example, or control. Whereas in AI, a super intelligent AI, I mean, I want those things. It may just want some other thing and destroy the world in the past process of getting that thing and will think, well, why did it even want that? That makes no sense. You would not go for those things. And yes, as you're cascading, so I'll kind of, I'll have the examples of that. So, you know, there's a backdrop here. We've talked about the need for AI safety and for guard rails. And, you know, I think there's some really, really important points that you've made and that others have made in the space. There's, there seems to be this spectrum right now in terms of where people fall. And on the one end, it's slow down guard rails safety. Let's really understand this. And on the other end, there's this notion of a winner take all race where it's, as we need this as fast as possible, guard rails be damned. Let's just, let's just get there first. It's almost like the anti-Yudkowski model where it's just, you know, caution to the wind. And I don't know, you talked earlier, Greg, about, you know, some of the key players in Silicon Valley and beyond recognizing the risk. But, you know, how have you, how have you seen them behaving in practice based based on your research and based on your interviews? Like, where are we falling in terms of the development of this technology? Where, where should we be falling? And, and, you know, is there a message for the people at the helm in terms of like, should we be collectively trying to influence the behavior here? Yeah, that's such a great question because the story of AI in the last 10 years, certainly, has been a story where essentially one after another people have said, oh, I don't trust that guy to build AI. I need to build AI. And I need to build it faster than they do because only I can build it safely. And you, you see this. So, damas isasavas, a deep mind, Google buys deep mind, Elon Musk, Sam Altman, create open AI as a direct rival to Google, Dario Amadeh leaves open AI because he says, and even those guys are not committed to safety. Meemaw Elon Musk is kicked out of open AI. He forms X AI. And essentially this drive to create it first and to create it, they all say for the benefit of humanity. Well, actually, I don't think Elon Musk says that directly, but nevertheless, that drive has then resulted in a race. And even just we're talking about the race within the US, we're not even talking about the race with China, which amplifies things. So, that kind of, I guess you'd call it Silicon Valley approach to product development, where certainly making it first, making it fast has a lot of benefits. I mean, not only being first to market, but really being able to set the create, create the model, and create the sort of the prototype and sort of what people are interacting with. And then you fix it. You know, you release it and then you fix the bugs afterward. So, that kind of approach to super intelligence is, I mean, it's, it's really, it's really amazing to me that Neta, for example, has a super intelligence division. So, it feels like something's sci-fi. But the other, I think, question of it is, why, why are they doing this? Why, why the risk, sorry, why the race given the risk? And, and, and how do they justify it? Given that every one of the people I just mentioned has a, well, not metta, but, but others have a, have, have stated that they are very worried about the dangers of this technology. How is it that those same people are in a race to create it as fast as possible? Right? And I think one book that I would recommend people or read or other essay is, is called Machines of Loving Grace by Dario Amade, who started Anthropic. I don't know, have you, have you read this, this essay? I have it. It's, no. It's, it's, I, I first encountered it because a number of my humanitarian friends, you know, from, from Nairobi and, and Ukraine and others, they were all loving this book. And, and just to, just to sort of understand the context here, I mean, these are a lot of folks who started off in NGOs and then got disappointed with NGOs, started companies to really do good work that they felt they could do work faster, more technologically savvy and help humanity, not under the rubric of kind of philanthropy in NGOs, but rather through a, through a startup model, that kind of cohort, they were blown away by this essay, Machines of Loving Grace. Machines of Loving Grace is, I think, the best, probably most clear-headed manifesto for, for the accelerationist point of view. Mark Andreessen also has the techno-optimist manifesto, I think it's called, but I would say Machines of Loving Grace is far clearer in terms of what is the approach of somebody who, for instance, Dario Amade, he's an effective altruist, he believes, he kind of disavows that now, but he's a certainly a person who believes we need to do the most good for the most people and live, live our lives according to that. So what does that mean that he is now creating a super intelligent AI that might destroy us all? What he lays out is, he makes this case, he says, you know, it is critical to have a genuinely inspiring vision of the future and not just the plan to fight fires. He says, yes, there are risks, there are dangers of, of, of powerful AI, he doesn't use the word AGIs, his powerful AI, but at the end, there has to be something we're fighting for, right? Some, something that we can rally, rally towards. I think he says, fear is only one kind of motivator, we also need hope. So what is it that those who are building AI are fighting for? And he lays out this vision in that, in that essay of the compressed century. I don't know if you've, you've come across the compressed century idea, but this is like 100 years of progress in, in five or 10. And so all the scientific developments that we may have in the entire 21st century in a bit of the 22nd will all happen. He says in the five to 10 year window after we have a suitably advanced AI, and when that, when will that will happen? It's, you know, that's, there's a lot of debate about that. He's even said it might happen as soon as 2026, but nevertheless, it's soon, it's within our lifetimes. This is what he believes. So then all of the scientific progress, what will it lead to? And so we could go into what he thinks it'll lead to. It's actually quite a fascinating list, talks about biology, health, work and meaning. But the reason that resonated with so many people that I know in the developing world and other places is that they're talking with people who are not worried about their jobs being taken away. They're, they're, they're in terrible jobs, so they don't have a job. They don't, they don't like their careers. We know it's easy for you and I just sit here and say, God, I can't believe AI might take our jobs. We, I think we sort of like our jobs generally. But there's a lot of people in the world who are suffering, a lot of people in the world who need solutions, major solutions, to, you know, climate change, to poverty, to, you know, hunger. And the accelerationists believe, or certainly Dario Amade and the SSA believes that an advanced AI is a radical solutionizer. And it will come and it will, it will bring about changes that we cannot even imagine. And what's so fascinating is how similar somebody like Dario Amade's vision is to Elie Azuriyukowskis, the, if anyone builds it, everyone dies, an author. Both of them, this complete doomer and then one, and an a fairly, you know, accelerationist and maybe, whatever you want to call him. But he certainly, you know, believer in the power of AI. They both believe that this is going to be such a radical change. And, and fundamentally upend money, the things that we treat as normal. Both of them say the only thing normal about normal is that it ends. Normality always ends. And so, yeah, the only difference between them, of course, is whether that will end in disaster or whether it'll end in delight. And, but I think these, both of these people know the models very, very closely. They're staring directly at them. They've seen the progress of them. They understand how they work. So it's worth, yeah, yeah, being with that, sitting with that imagination, whether it's the darker side or the positive side. But, yeah, the simple answer to your question is, I think they see a tremendous upside to this and is worth the risk. So, I want to dig into that a little bit because I'm, you know, I'm a self-proclaimed cynic for a lot of this stuff, or at least a skeptic. And so the, the cynic or the skeptic in me says that, you know, the dark side and the light side or, you know, a hair's breath the part. And it, it just seems, it seems to me or that there's certainly an argument to be made that what tilts people to the light side is if they're asking for a big bag of money. If they're asking for somebody to fund them, then suddenly, oh, it's going to be amazing. You know, versus if you're a Yudkowski or you're not asking for money, it's a lot easier to move to the dark side. So, so I mean, let me frame it this way. Like, to what degree do you buy the utopianism or the true accelerationist vision of some of these technology leaders versus, you know, how much do you think it's a fundraising tool? I think that's such an important question, right? And it's definitely one that a lot of tech journalists wrestle with because they've, I mean, everybody that I know has been burned, whether they were burned on Google Glass or they were burned on the Metaverse or whatever, that's the nature of technologists is to hype their, their, their stuff. Now, we should know that Ele Aceryud Kowski is not selling anything. He's just trying to warn the world is like a Jeremiah. And there are many people out there who, you know, for example, I would say Yashua Benjiro or Jeffrey Hinton, Jeffrey Hinton left his job at Google, a very high-paying job, which he got only into 60s. Jeffrey Hinton, of course, a Godfather of AI creator, we're not directly greater, but certainly a co-creator of this incredibly important algorithm, backpropagation, which led to AI models. He left his job and at Google and he's he's out there trying to warn the world about about this technology. So I don't think it's just the, all the hype is coming from the people who were, who stand up benefit. However, it does make it extremely difficult to talk about because clearly there's a lot of hype. I think, I think probably, it would be great to talk a little bit about Yashua Benjiro maybe because Yashua Benjiro to me is the, is is such a different kind of model out there. And it's not just a complaining about or warning the world, it's, he's, he's, he's literally presented the world with an alternative, a non-agentic model of, of, of AI, which, which we don't, we, we're not even talking about it all. We, we're imagining that there's only one way to make AI, they're going to get smarter and smarter and they're going to do more and more things and they're going to, you know, make our airplane reservations and then they're going to, you know, take over, then they're going to be our lawyer and then they're going to be our doctor and then they're going to be our CEO and then like take over more and more human roles, right? But what Yashua Benjiro has created is he's at, again, another Godfather of AI, early, early pioneer of these models, huge fan of open, who, which fan of AI until he's more recently looked at open AI, look at the chat GPT and, and realized that he's devoted his life to something that make kill humanity. So he took a huge U-turn, created something called a scientist AI and have you had Yashua on the show? Yeah, I haven't. I've heard, you know, his interviews that you did with him on your program, but we haven't had him on your show. So why don't you, you know, if you'll indulge me, you know, tell us a little bit about his position and what he's proposing. No, I appreciate the chance. I mean, in you're indulging me because, you know, hopefully he'll come on the show soon and, and, and say all from the, from the horse's mouth and I'll, and not, you know, I don't have to deal with the poor, you know, the middle man here, but basically, Yashua Benjiro, he has created this thing called scientist AI. And scientist AI is, as he says, it's like an ideal scientist or psychologist. So each job is to understand and to explain and to predict, but not to act on its own goals. So it is non-agentic. It's a non-agentic model, meaning it says not have its own long-term goals that it's trying to achieve in the world. It is probabilistic and cautious. So for example, unlike if, you know, if you've interacted with like quad or or or or Chachupe T, it doesn't kind of bombastically think it knows every answer and act like this overconfident kind of, you know, it all, rather it will have a probabilistic assessment. I'll say like, well, there's a 3% that 3% chance that this plan leads to this outcome or this other outcome and it will tell you you're wrong, which which a lot of times the other models are not designed to do. So it's not trained to persuade you. It's not trained to please you. It's it's supposed to be honest and calibrated. And most importantly, and this is his vision, it is supposed to be or it's hopefully his plan is that it might be used as a guardrail for other agent, agent AI. So for example, you could run a powerful AI agent through scientist AI and it will evaluate the safety of their proposed actions and convido them. So he's got the Yashua Benjiyo has this thing called law zero, which he's talked about, but law zero is essentially a different approach to regulation. It's not saying, okay, we we're just going to regulate these companies and ask them to follow certain benchmarks that you will they are we don't understand we're going to use AI to regulate the AI essentially use a technological solution to to this kind of yeah, to this kind of a safety approach. And what what Benjiyo is fundamentally worried about is the the the very direction that the industry is heading, which is agentic AI. He doesn't even necessarily talk about the dangers of like say super intelligence or you know, AGI that's a term that gets thrown out a lot about, but he which means that artificial general intelligence as smart as a human. He just says well, as soon as something is agentic, meaning it can help people design a plan or it can manipulate humans or institutions to to achieve its ends or it can resist being shut down or it can you know, cause harm and we've seen the models do all these things already. That means we should we should not be modeling AI off of humans. We should not be modeling them off of agency. That's what not only humans actually every life form on the planet has some degree of agency. That's what kind of defines life. Artificial intelligence does not need to be agentic. It can just be a very helpful, very smart, very perceptive tool. And thus we get away from deception, we get away from manipulation because it won't have any agency of its own. That's not at all where the industry is headed, but I think it's important to know that there's an alternative out there. Yeah, and it's a compelling alternative and certainly for us, you know, as a species, if we think about what's best for us, I like that vision. The concern I have is it seems like if anything, these models are getting more kind of fractured and fragmented. Like as we've seen more open-source AIs, even if we get to things like deep seek and some of these models outside of the US, I don't know, like have we crossed like a Rubicon here in terms of the ability to control these? Like how do we, I don't know, it kind of feels like the cat is out of the bag. Yeah, that's a good point. I mean, this is my main beef personally with folks, with the real Doomer camp, the folks like Elias Yutkowski, because I just don't understand, maybe I'm just not smart enough to understand, but I don't understand why we can't put the cat back in the bag a little bit. You know, why human institutions can't rally to create the right regulations and to sandbox new models, for example, until they're truly ready. There are things that can be done, they're not easy, it would take a lot of societal will, but I think it's not the time to feel despair and thank gosh, we've already kind of passed over some crucial threshold. I mean, in some sense, we've passed that when Chattity PT was first released or you could say we passed that, I mean, the Turing test has long been kind of, I mean, there's arguments about whether the Turing test has been passed, but certainly we've crossed some sort of incredibly important line. I think too though, you know what it gets at for me, one of the things about working on this series that really taught me is the importance of storytelling and imagination in this technology. And that goes all the way back to Alan Turing, who, and I didn't really understand this because I understood the Turing test as a kind of benchmark. Like this would be a benchmark of human, of sorry, of machine progress. You know, once the Turing test was passed, so for example, if if I could chat with a machine and not know it was a machine, then wow, it's achieved some sort of milestone. And that was the Turing test, what he called the imitation game. But in fact, what Turing was doing all the way back in World War II and right post war when he was introducing this idea was not just saying, okay, this is a benchmark for machines to pass and once machines passed that, we can say that they're on their way to really being thinking machines. He said that, but he was also taking what was at the time a really complicated philosophical debate about, well, can machines ever think? And he, and he treated it like an engineer and he said, you know what, we just need to create an observable metric by which we can say that they're thinking and that's, and we don't have to deal with the philosophical, you know, discomfort of saying, well, can machines think and what would that mean if they are thinking, etc. If they pass the Turing test, then then they're on their way to thinking machines. And by doing that, you're not only sort of freed engineers from the philosophical angst and set them a kind of path to follow, which they certainly followed and what we kind of leads to Chachypeti today. But also, I think created this, this, this new way, he kind of, he kind of realized something very important, which is that we would not recognize machines as thinking until we started interacting with them in a human like way. And that when they started using language and and talking back to us, that's when we would see them recognize their thinking. And, you know, you could get very philosophical about this. You could say, you know, trees do a lot of thinking. But we don't think of them as thinking. There's a lot of other living things in this planet that, that think, but we, their intelligences don't interact with our own. And so we're not really that concerned about them or many of us are it. And so what Turing felt was in order for us to sort of really respect and use this word respect machines, they would need to sort of interact with us like this in the way they, the way Chachypeti interacts with us. But the danger of that, right, is that we then don't see, this gets back to something we talked about, we don't see the alien, the alien is of it. And we start to think, we start to interact with it maybe too much like a person or like a fellow human or a human like entity, a human thinking like entity. And thus we make very important cognitive mistakes in interacting with it. And we perhaps trusted or distrusted in the wrong ways. So, and this also happened in sort of this failure of imagination then or this, this kind of this way in which our imagination is channeled, we don't see how the models are vastly different year over year because we're interacting with a model right now. It's interacting with us. Yes, like the fastest human we've ever talked to, but it's still recognizable in its thinking in some ways. And it's, it's something we can respect but recognize. And so it's very, very difficult for us to then think, okay, wait a second, this is just the current iteration. We have to imagine a different kind of intelligence that this could grow into. And what would I be in that, in that situation, if I could say one more thing, it's, it reminds me honestly of being, you know, reporting in Ukraine or reporting in Afghanistan or reporting in South Sudan. And you talk to people who are in the middle of a war. And they say, you know, we knew the war was coming, but we just didn't imagine what it would feel like to be us when it was here. And they want me to know, you don't understand, I was just planning my daughter's wedding in that building over there, which is now like a bomb that like, and they still see, they still see the place that they were planning the wedding. They still are thinking and frustrated about the money they spent on, on the wedding invitations or something. You know, they haven't quite transitioned over from the old world to the new. And I don't know, I don't want to make this sound like a doomer forecast because I think the future could be quite bright, but it does take an active imagination, whether we think we're headed toward, you know, any of these kind of versions of the future to put ourselves in the new version of the future and to sort of play with, you know, our imagination and to imagine that the, you know, the world is not going to be the same as it is now. It's, so thank you for that. That was a really interesting answer that covered a lot of ground. So I can, you know, there's a lot of things we could talk about coming out of that. But one of the pieces I want to talk about is you know, you got me thinking that there's all this conversation about the future and what will happen in the next model. And we talked about this before. But, you know, the sense that the future is already here or, you know, the Arthur C. Clark quote about the futures here and not, you know, evenly distributed. And, and, you know, how many people out there right now are interacting with chatbots or with this technology in a way that would have been completely unimaginable to people, you know, only a few years ago and how quickly we are to, you know, what I think has been built into the design of these tools is human engagement as a design principle, if I can call it that, right? Like if you go back to the touring test, it's, you know, what is it? It's, it's the fact that thinking for us is measured in terms of interaction with us. And that's exactly how these things have been designed. They've been designed to, you know, flatter to create engagement to let people's guard down to continuously engage, right? Like, you know, one of the things I've noticed that chat GPT does is it always prompts you for like, Hey, how can we keep the conversation going? What do you need next for me? Right? It's almost like a, a meta-focation or a social media-focation of this. And where does that take us? You know, we're, we're having a conversation out of one side of our mouths about we, we need to be more cautious about the alien nature of this. And I'm, it kind of feels like we're watching that battle be lost. And is that? I don't know, I guess let me, let me kind of frame the, the discussion up this way. If we're, if we're worried about where that future direction is taking us, you know, do we have an obligation to push the, the technologists and the owners of these tools to put more, put more guardrails and principles in place that prevent people from, you know, becoming enamored, let's say, at the extreme end with these tools. Or is it more purely on the demand side? And we just have to do a better job of educating these people about, you know, what they're signing themselves up for. I think that's, I think the key question is, it's such an important question. I think that, well, a couple of quick things. One is that the metaphors we use to describe this do matter. And even if, even if, even if we're sort of taking the perspective of a business leader saying, okay, let's, let's, what's practical here? How can I use this to cut costs? How can I use this to maximize efficiency? Compete, even still our employees will be narrativeizing these, this technology. And in interacting with it in a way that, as you say, that, that kind of pulls the trigger on our very relational intelligence and our sense of self, which is so based on how we interact with others. So if we're interacting with, with the AI, and that's going to weigh, then our sense of self is, is disrupted, is, is, is affected. And we, even if, if the AI doesn't intend that or isn't trying to quote, unquote manipulate us, there's a French philosopher, Catherine Evans, who introduced me to a concept. I felt it was quite, I don't think she's published about this yet, but she said, you know, I guess you're probably familiar with, there's all kinds of UN rules about not anthropomorphizing intelligence, not anthropomorphizing technology and these go back some years. So she was creating a, I think, a comic book for kids about AI, but she ended up stumbling into this idea, since she wasn't allowed to create AI as a, as a, she wasn't allowed to anthropomorphize the AI because of UN rules, she was doing it for them. And also she wasn't allowed to have any antagonists. So the worst kind of narrative situation, you know, no conflict, no, no people, what do you tell a story? But she ended up coming up with this idea of AI as a place. And, you know, in the cartoon or in the graphic novel, YouTube content algorithm is a sort of a place and it's, you lead it with a map and it leads you down different, different, different, different, different recommendation portals. But I found, in my interaction with the different models, and again, this is, AI is not just about chatbots, we, I always feel like that's important to note that it is kind of useful, I think, to think about it as a sense of place, if only because it gets it what you're talking about, which is how are the norms and culture and cultural expectations of this place a little bit different. You know, how do I behave with the model that's not quite the way I would, that I was going to behave in person. We all deal with this, right? And social media behavior is different than we are in person with each other. And so I think if we think of AI in that same way, because it is programmed exactly as you said, it is programmed to be helpful, to be solicitous, to prove its value. You know, how 9000 in Stanley Kubrick's fantastic film is constantly talking about how foolproof it is before it murders the entire crew. And that's a sort of an important part is that these, the models are advertising themselves to us, much like an intern that wants to keep its job and get promoted will be constantly promoting itself. And we like to think that, oh, that's kind of helpful and that's quite culturally appropriate. Certainly, we want it to tell us other things it can do. There's nothing wrong with that per se, but it is a kind of different world. It's a different place that we step into. And yeah, I mean, I think barring, whether it's the supply side or demand side, I think maybe both solutions are important, but it's also about the metaphors we use. Well, on the, you know, on the demand side, the reason I ask this and it's something that I've been brushing up more and more against is this notion that it is providing a value to people, right? And I'll use the specific example because it's made, it's one of the most intimate. I don't know if it'll be the most intimate for long, but one of the more intimate ways people are using AI or using chatbots is as a therapist, right? And it becomes a way to process, you know, whether it's trauma or conflict, it's a way to have an intimate relationship. And I don't know, maybe even some ways get a better sense of self or a better sense of purpose. And it's working, right? Like, I've talked to enough people, some of them AI experts, some of them, you know, friends who say, well, I don't have a therapist and this gets me one and it's useful or I do have a therapist. You know what? AI is better than my therapist. And yeah, I don't know, like I just think about where this is going and what happens when you've created a technology that does provide this service and that people start saying, well, this is actually better for me than my human relationships. Like, where does that take us and what do we do with that? And, you know, what are the implications for us as a society where historically if you wanted a human relationship, you had to have it with a human and that was a propagating force for the continuation of the human race. Yeah, no, no, I think you're right. I mean, it's hard to parse out what is, I mean, morality shifts through across generations, right? We know this. And standards change, I think Esther Parell has made a very important point that therapy, AI therapy is thin. It's a thin kind of therapy, which is an interesting way of approaching this, which is to say that it's not a of challenging kind of therapy. It's not a one that's whole-bodied. It's thinner. And it she feels that it then maybe leads people to have thinner or have have lower expectations of human relationships. So, it's sort of lowers the standard, is it where? I think that's a, yeah, that's a concern. At the same time, though, I'm a little bit worried about saying that I'm worried about saying that I'm worried where where society is going, just because there are so many different stories. I mean, there are so many people that don't have access to therapy at all. And so many people for whom, I think, a first chat with JAPTBT or where any other model might be the gateway to a certain, not a self-understanding, it's not like everybody's in a situation where, oh, should I, should I call my therapist or should I, and perhaps they don't have health insurance, perhaps they don't have that access. So, I just think it's hard to generalize and say, we are going anywhere. It goes back to the quote you said, which is the future is unevenly distributed. Yeah, it's very, it's very difficult to know the sum effect of this on, on human society. But perhaps it does, it certainly enters, it certainly adds a different kinds of expectation of perhaps a lower one. To change gears slightly, I wanted to come back to something you said earlier about the culture of AI and, you know, culture is being kind of a component here and, you know, that being dynamic and looking different in different places. You know, you created the the Rough Translation podcast and you had a sub-series in there about at work and you looked at work across different cultures. And I'm curious, you know, whether we're talking about AI or whether or not. If we're talking about the future of work, how are you seeing people's relationship with work change? And, you know, as you did that series, you know, did you see market differences across cultures or was there kind of a common core of the way people approach work? No, thank you for that question. Yeah, you know, just to highlight maybe two things I learned from that series. So one, we had one, one show in the, I think it was, it might have been called Failure as a four letter word. That would maybe that was a thrown out title, but it was about how this this concept of fail fast, which we think of associate with Silicon Valley just does not translate well into other, and to everywhere in the world, you know, we talked to somebody in Nigeria who said, you know, what about fail slow? Everything takes forever here. Or there's so much bureaucracy or perhaps you live in a culture or somebody else was from Mexico City. They said, you know, failing is such a taboo that once you fail, you never want to even show your face again. So fail fast doesn't, isn't as accessible. And yet, because we live in a digital, globalized culture, fail fast and Silicon Valley and inspiring entrepreneurial stories, we're very much part of the water. So one of the things that I wanted to explore in that work series was how do you square what you're reading and seeing on YouTube and inspired by with your own local constraints? How do people translate that entrepreneurial spirit? You know, and the idea was not to say, oh, well, it's more difficult to be an entrepreneur in Mexico City or Nigeria. I mean, perhaps that's true, but in some ways, actually, that's not true. In some ways, the opposite is true. So nothing's nothing's nothing's black and white. But to me, it was it was about finding space to give permission to people who may not have swallowed that fail fast mantra, who maybe feel alienated by it, to find a home in it. You know, how do they find their own way to it, which has always been my life's work, honestly, as a journalist, is to try to focus on these mistranslations, focus on the ways in which, you know, some advice or some, or some sort of piece of culture might feel foreign or inaccessible. But how can we find the commonalities? How can we sort of all be, participate in some ways in the global economy, even from our different cultural angles? And what role does where we're from affect what we think of as good or how we approach the question? And then one more example from that series and come around to it is looking at, well, there's a particular law in, in Portugal. I remember at the time being reading about it, it said that if your boss calls you after hours, or the boss is not allowed to call you or email you after hours, or otherwise they would get a $10,000 fine, or $10,000, $10,000 euro fine, sorry. And what we discovered was that this law, which seemed to be quite a, you know, lovely laws, if you're a worker suddenly you don't get called by your boss, was actually quite a cynical ploy by the Minister of Labor, which formerly was the Minister of Tourism, to sort of push Portugal as a place for work-life balance. And because she knew that nobody who was coming to Portugal would ever kind of be affected by that law, I was fine to just kind of pass this law, and to create this illusion of difference, this illusion that Portugal was this space that we could go to that was a, that had their stuff in order that figured out something that for example the US companies hadn't figured out, which is work-life balance. And so the cynicism though that then created then for people in Portugal was quite profound because they felt that these laws were then just created for the outsiders. And there was a second kind of, there was, it weren't created for them. And so what do we learn from these two stories? Well, I think that we're living in a time where work is global, where work advice, work people work across borders, and we people have international teams. And yet those teams are all dealing with things that are not only because of their own cultural point of view, but just because of the role of geography, the role of the role of societal expectations, that even though people are, under, speaking increasingly, you know, global English, these differences really do matter. And I think this directly translates so I would sum it up, sum it up the theory of the future of work series as an exploration into how, into specific stories into how even as teams are global and work is happening more internationally, the fault lines, you know, between cultures and between societies really are becoming even more exploited and mean more and are more painful to people because they see the differences. You know, it's, it's, I saw this in my years living in Nairobi going back and forth to East Africa, how people were so much more aware of their status in relationship to their age mates and other places because of the internet. And so what does this mean then for AI? And we haven't talked about China, but for me, the, the, the, the kind of story that's being told within China about AI and the story that China is telling the world about AI is so different and it's so important that we realize this because we're in this battle with with China or AI companies are in this battle with China, this race and yet China has its own specific priorities and its own kind of narrative about AI within within within the country that affects the workers and and I think we'll become increasingly important as as the AI raised heats up. Let's dive a little bit deeper into that. So what what is the narrative there within China and you know, how is it the same or different from what's being projected outwardly as it comes to AI? Sure, sure. So I mean, I think basically, you know, for a long time we were told that AI that Chinese firms were more interested in sort of practical or applied AI whereas the US has it has it has it made a more clear stated goal of superintelligence. I think many people that I talked to did not believe that and indeed Alibaba became the first major Chinese tech giant to openly discuss artificial general intelligence and even superintelligence. So we know that China's has just as much ambition in the in the area of superintelligence but importantly, the nationalist ambitions and the technological ambitions are in somewhat in our in our in contradiction there because or at least not only is aligned, I was talking to a safety researcher who would mention that in China, you know, red teaming this kind of safety testing of the models, it's much more rigorous in Chinese than it is in English. So you could ask the the model, for instance, like moonshots, Kimi K2, which apparently is it's thinking is outperformed open AI's chat GPT-5 as well as anthropics, quads, latest models. So okay, so you take this model and you can get it to do things in English that it would not do in in in Mandarin Chinese or in Chinese language. So what that means, of course, is that the red teaming is is very much about political control. It's about making sure that people are not asking questions of this AI that will destabilize or work against the Chinese government but it's not exactly the same thing as a safe model. And the whole way China got into the AI race is I think also instructive in this way where where AlphaGo deep-mind or deep-seek rather the sorry deep-mind the the Demisys-Opuses model created a created a go player in 2016 that then beat the Chinese national champion is that's when China became interested in go or interested in AI when it kind of hit home. You know, it hit a game that the China reveres and is a is an ancient Chinese game. So this has always been about I think the turf war about what is happening within China how in terms of China's concern of controlling its people China's concern in preserving its own sort of territory that that is why the the AI race is being is being had. So what that means I think for us is that has these models become smarter there's nobody who is concerned with making them safer in a complete way that the goals of winning winning the AI race are supersede the goal of creating a model that keeps us all safe. And even though we should we should imagine that China doesn't want the US to get a super intelligence the US doesn't want to try and to get a super intelligence there should be you know when you think about nuclear disarmament there should be some sort of agreement between rivals that's possible here just as there has been in nuclear arm trees armistreaties. And yet despite the fact that I know many good diplomats diplomacy these days as seen as a is a dead end and so we're all racing on our own sides. Yeah well while and it's you know if you're a student of history it's a bit concerning that it seems like with most of these technologies you know nuclear arms included the technology comes first and the safeguards come later right like it's we've got nuclear you know non-proliferation treaties that came after we deployed nuclear weapons there's the you know that the story making rounds about the the gap in years between when the first you know assembly line car was made versus when seat belts were introduced and you know I think you hear a lot of the dooms say we may not have that luxury with AI right like it's again to come back to the Yudkowski you know good piece of brand marketing if if anybody builds it you know everybody dies so that to me is one of the exam questions here is is this time fundamentally different or or is it the same and how do we how do we grapple with that. Yeah it's interesting because at the AI summit and so which I think was just 2024 it's at that summit that 15 leading AI companies committed to quote you know defining the intolerable risks and agreeing to not deploy which one person said to me that's the only time trillion dollar companies that agree to literally deploy a product if it's not safe right so in some sense what's also unusual about this industry is that you know unlike the car companies that needed Ralph Nader to to shout about seat belts for quite a long time and a lot of people to die before seat belts were even included and you know similarly you know you can look at other industries with their whistleblowers and their and their dadflies that they know AI companies from the get go have been talking about AI safety so it's it's they've actually been the main people talking about it so so in some sense you could say well they are ahead of the curve in in safety the problem with that I think is that well first of all the problem we pointed out before which is that they don't actually know what the models are capable of until they release them so there's that unknowability in this technology the unpredictability that's just part of the that's part of the way in which AI is constructed but also you know not safe defining the intolerable risks is not an easy thing I mean no no technology in the world is completely devoid of risk you might be somebody who enjoys you know hang gliding I might be somebody who is scared even when I go in the back seat of a car when I when I don't know seat belt I mean there we have different different standards of risk and so this is what I think is so important about for instance your show and this kind of conversation is to dig deeper into what we mean by not safe because that's kind of where a lot of the discussion ends it says oh my gosh these things might destroy us and then it stops there but but in fact you know if you look at for example I mentioned soul the the agreements that came out of soul so I think that was like a 500 word agreement you know that just a statement it was a kind of an open letter and then after that the companies issued maybe a thousand word mission statements documents basically saying how they defined intolerable risks so there was some effort to defining it but I was talking to somebody and they said you know yeah about five five thousand words would be better 10,000 words would be even better than that I mean the more granular that we can get these companies to be about well how do you define risky how do you what specifically do we want to see in the model and the training and the pre-training requirements and the conditions for deployment you know specifics before this thing gets deployed I would say the details matter and unfortunately maybe because it's our relationship to technology I would not say that about your listeners but you know I think just our society more broadly we've just kind of come come into this idea that oh technology will either work or not work you know it'll either glitch or or it'll function and the best technology is invisible and maybe yeah maybe we can't do that with this maybe we have to kind of get nerdy and get into the details because I'm agreeing with you I I'm just by nature I can't subscribe to utopia but I also just can't subscribe to apocalypse maybe this is my failure of imagination but but what I do think is is that I have to do the work of reading about back propagation just to just to understand you know what how alien these intelligences are and thus maybe understand a little bit more about what kind of requirements I would want to see before the models are released right well well and you know it's it's great to have everybody sign off on this and then I don't want to dismiss that because that's a win as you said that's something that's that's very unusual for technology companies but you know to what degree can we actually create any sort of enforcement mechanisms here right because that's that's all of this comes down and you said it right off the top there's like a trust component here and there's the fact that the future of civilization is concentrated in the hands of a bunch of guys who may or may not be in a group chat together and what happens when Sam says you know oh yeah we're not going to do that and then says his team let's make sure we do that anyway you know like it's just it's just a wild I don't know it's certainly beyond my imagination that we could be that we could get here yeah yeah no absolutely and and it has been my kind of mission through the through the whole show and it's not been quite not easy in this in this reporting is just to be constantly thinking what is the role of a person who is not a not a technologist not a lawmaker what is their role other than to just sit back and watch this future happen because clearly if you're just a person if you're a parent if you're if you care about the world it doesn't feel acceptable to just sit with this level of risk and do nothing but it also feels premature or perhaps histrionic or I don't know just to freak out and some and you know we I had this conversation with a number of people even somebody who was reviewing and recommending the podcast and they said you know well I had to give it a mixed review because I was also freaked out and so I said you know that doesn't mean you shouldn't listen but it's but I got their point which is that I think it is important for us and you and I and anybody who's has the opportunity to talk to anybody about this stuff is to think about how how we how we walk the tightrope that I guess all profits and and and biblical figures have walked before us which is how do you warn people without making them unhealthily anxious and and you know I see the that we in the in the podcast we we talk about these three groups we talk about the doomsr with accelerationists we also talk about this third group called the scouts and the scouts kind of believe essentially in a nutshell that this win win opportunities possible that the more AI safety is absolutely necessary but that we shouldn't just stop AI but I I see them struggling as any centrist kind of struggles where you don't have a clear message this is amazing or this is horrible and you're trying to explain to people know this is this is this could be really good um but we have to do we have to understand the models a little bit more and because it's not easy to say this could be good but we certainly needed a regulation that will do X because there's it's not clear that there's any regulation that the super intelligence or or just in advance day I will not outwit this is where I think in answer to that question more people need to be involved in the problem more people need to be scenario planning for what let's say it never happens who knows let's say 10 years 20 years everything looks exactly the same as it does now then fine then we will have done a mental exercise for no reason okay but let's say there is a chance let's say it's a good chance that things look radically different in 10 or 20 years perhaps some scenario planning now as to how universities might change how schools might change how parenting might change how how the community living or sort of a communal relationships might change with a super intelligence or with an advanced AI that is doing most of the human labor um that's a that's a question that we could play out that we could we could worry in fact I'm you know if anybody who wants to contact me with with some scenario planning ideas I I'm working on this right now so a question uh and and it's very important if you're doing scenario planning to not be freaked out in the in the sense of that word not be panicked but also not be uh not take a cavalier attitude and um and maybe we could figure out some new solutions that would work even if we don't get to advance AI that's my optimist talking no I I love that and I love that as a mission for you know the the podcast in general and frankly the journalistic mission of it all and I think it's you know yeah I agree that it's super super important to just sort of pivot that question a little bit what when we think about scenario planning and we think about you know what we need to know and what we need to do differently to build the future we want what's your advice for you know business leaders or government leaders you know what in the organizational side of government yeah outside of Silicon Valley like for the people who are looking at adopting this technology looking to you know figure out what what they need to do differently to be successful as people and as organizations what what should be on their radar and what guidance would you give them yeah I appreciate it um one is I think you know one of the feet the feedback that I've gotten from this from the from working on the series I've talked to people who feel that their companies and these are these tend to be say um middle level decision makers or even people who don't feel like that they're the key decision maker they're just under the CEO they feel that their company is either either moving too fast or being left behind right this is the this is always the story the moving too fast um goes with they're throwing out human intelligence they're trying to replace they're trying to automate everything and the we're not moving fast enough saying we're going to be left behind we're we're we're we're sort of stuck and so I'm sure the business leaders are feeling that pressure and having to make decisions all the time about the pace of adoption right and it's extremely difficult to make a decision about the pace of adoption of something that keeps changing that keeps developing and getting smarter um because how do you make that decision so one thing that I think is helpful or two things that I think has been helpful for me in talking to those people and in some sense a laying their concerns but also addressing um addressing what is the elephant in the room which is what will my life look like what will my industry look like on the other side of this technology the two things I always say is first the the the I think I I told the story before but the the fact that a chess computer with a human a human and a computer and an AI is smarter is better than a and then um any AI or any human at least currently and I think that's going to be true for a while so what uh I think the smart approach is to think about how do we enhance how do we you know supercharge the work of the of our of our of our of our employees how do we get them to do not only more but to to to think deeper and to make more interesting decisions to do use the predictive power of AI to make decisions not only for for now but to do more sophisticated planning and so I think that that's actually addresses both concerns that we're not moving fast enough as well as we're not uh we're moving to uh to uh to uh we're moving too rapidly in that humans our humans our our employees need to feel empowered they need to feel that they are getting smarter because of this AI and it gets it really the second point I would say which is that the story there's so many stories from sci-fi that are not only just in our heads but baked into these models I mean even the way for even people haven't seen say Stanley Kubrick's um masterpiece uh 2001 Space Odyssey or they haven't seen a blade runner or the matrix even if they haven't seen any of those films um even the way that AI is so solicitous and the way it is encouraging and you talked about this earlier it begs a kind of narrative it it makes you think of a story where you know like an Isaac Asimov kind of narrative where this uh this servant is quite helpful until they're not um um and Jeffrey Hinton I would say says something so smart about this uh he says that most CEOs you know most leaders are used to not being the most intelligent person in the room right if they were the most intelligent person in the room they're probably not a good leader because they need to hire the smarter people because they hold the vision for the company and they've hold the the you know the they are the leadership the leadership shouldn't be the best at every task um and so in some sense he says that these models have been designed by very smart people who are used to even smarter people working for them and they are not threatened by intelligence in the way that say the employees in a company might be threatened by the arrival of this incredibly capable infinitely knowledgeable um the machine and technology that doesn't need to sleep and doesn't need to eat and and and and never and never forgets anything so Jeffrey Hinton uses that as an analogy to say that that in some sense the CEOs of these companies aren't aren't worried enough they they're used to they they can dream of super intelligence and still imagine that the super intelligence will do their bidding just as their employees do their bidding because they they haven't really appreciated the fact that that a super intelligence is much much much much much smarter than any you know Einstein level person that they hire but I think the the the takeaway for me for for business leaders who are adopting AI is to watch and take care for the narratives that uh that that are often indis-disharmony this non-alignment between the C-suite and with the workshop floor the the rest of the employees that um our relationship to an intelligence is going to be different in the C-suite and is going to be different among the workers and that people need to feel even as this is making the company better or more efficient that it's also making the humans smarter and more capable and and even happier so that's that's more around the narrative than it is around the adoption but I think that it's it's kind of can guide all the adoption decisions if that's if that makes sense it makes complete sense and I I love that guidance and I think it's it's so important these days I mean it's and and you read about it everywhere and it's something we've had people talk about on the show of you know you've got this kind of two-speed or or kind of um you know narratives in conflict of CEOs saying oh this is going to make my company so much better and frontline workers say oh by you know putting me out of a job and if you can't marry those if you're an organizational leader and you can't get the people that report to you to be excited about this it it feels like it's going to be very difficult to successfully you know rally the organization and build something uh it's going to be worth doing I agree and I and I think also we may see this play out politically as well I mean currently um I don't think that AI is a is a political issue yet um there's not a clear democratic or republican position on AI but that I think could quickly change and we could see um kind of people rallying around this becoming as polarized an issue as any other perhaps we could we could theorize about what might trigger that but it's not hard to imagine for example in our polarized climate um a stop AI camp and a AI's great camp or whatever and that would be very disappointing that would be very disappointed because polarization is is really the the killer of anything complex anything complicated to discuss and to think about clearly I mean you and I are having a discussion where neither of us are in the say the stop everything camp or the accept it all utopia camp but in order to talk about the fine print of these models I would say to use our alien analogy if this becomes politicized then the alien definitely wins well said well said Greg with five to a few times now about uh this conversation has been pretty heavily focused on chatbots and I think that makes sense given that the the technology is here and it's it's very clear how it's impacting us right now but you know one of the in some ways you know cardinal sins of AI is conflating chatbots or just you know gendered AI with everything possible in the world of algorithms and these advanced technologies we've talked briefly about agentic um as we look beyond that as we look at some of the other big buckets of uh you know technological capabilities that are either here or emerging what are some of the ones that through these conversations have gotten your attention and maybe what are some of the ones that you think are a little bit less likely to have an impact yeah I'm so glad you asked that question because it does feel like um as you said we do we do make the mistake I often make this mistake of thinking that the chatbot is the AI but in fact the chatbot is just uh it points us to the the artificial intelligence underneath and and AI is being used in very different ways and I think also just to understand why frontier AI companies would be kind of barreling toward this future of super intelligence is easier to understand when we look beyond the chatbot for example um I mean just to give like a couple of quick examples um there's a company that I spoke with which is um using AI to to very quickly in an automated way harvest um cells from your own body so that you if you need let's say um a liver replacement it will be harvesting and and growing cells this is a very onerous process you have to find the exact right kind of cell that's healthy and then you have to encourage to grow and divide and kill or kill the other cells so you don't get some kind of disease thing doing this outside of the human body is very difficult it's something that it's a kind of pattern recognition that AI is extremely good at so um the vision of this particular company called Seleno but there are other companies like this is to uh I mean essentially have a future of medicine where we go to the hospital and we have a kind of cassette with our organs there on the on the cassette and perhaps if we need if we need a maybe in the future if we need a whole organ or currently maybe if we need you know some some tissue then we we don't have to use a donor we can use ourselves as the donor so that's just one example I think of how the very same tools that we're used to like pattern recognition making making these decisions but but doing it quickly um could transform healthcare another example that I always had um I think about is a particular crown created by a company uh French company Olivier Ulié is the is the scientist behind it and he's created an AI crown that will essentially read your thoughts so if we think of say neurotech or other brain computer interfaces this does not involve any drilling into the brain or any drilling into the skull you don't take actually stick something in there rather it's uh it it it um it works outside the head you just plop it on your head like a like a headset and a paraplegic man in was able to use this headset to control um not only amounts on the screen but actually to drive a Formula One race car on a real racetrack and uh using only his mind what's crazy about this story and the other stories is that this technology it's not futuristic it's actually exist I mean there is a real person who is driving a Formula One racing car using only his mind with just a headset that sat on his head and yet for that future to be accessible or be available to the rest of us you need a lot more data but that's always the question with with with every AI thing it's like whoa where's the data um I mean we we i'm sure you've talked to many people about that but in this case the data is our own brainwaves right so many many many of us would have to volunteer up our brainwaves for the AI to learn enough enough to have enough data essentially to to work with to be able to be not I don't want to I don't want to make it sound like it's just an out of the box throwing the headset and drive a car with no hands I mean I'm sure there's some training involved but to make that even a possible future where if I have had an injury I'm able to stick a thing on my head and then function as I was before while I recover or perhaps that's my new future that is the sort of vision of AI that I would say a lot of my humanitarian friends say we we cannot cannot come fast enough there's so many people that could could benefit from that from a point of view of accessibility or longevity or healthcare so yes so I definitely think that that's there as for things that may not have as much of an impact I guess that's that's everything else I would not say it's really hard to know I mean there's so many new AI companies coming up every day it's hard to know what's going to be what's going to be you know the wheat and what's going to be the chap but somehow it's going to be yeah I think there's some radical changes coming up. Well I love the examples you chose not not just because they're so positive but because there's such a there's such a radical departure from where we got caught up before talking about like you know AI as this you know alien intelligence right that this is not AI is as an alien intelligence this is very much you know basically a complexity engine that helps us you know serve humans better it helps us personalize medicine personalized care and just create tools that help us you know live healthier more fulfilling lives which is which is inspirational and it's just such a completely different vision from yeah it's trying to manipulate me or you know build SkyNad. It's actually true and it's that's why it's hard for me to understand the debate around the AI bubble because the bubble question has so much to do with valuation rather than value you know and so while and this is something I've never again when the mysteries of the stock market has had something can be valuable but still over overvalued but it's not that AI is is this promising future thing that once it crosses a certain threshold then it will be incredibly powerful and change our world the technology of current AI that exists is is already quite amazing and it goes far beyond the fact that you know the the Cheshire PT can say like write a write a pretty decent short story or or legal report nevertheless there's a contract with with the AI that's necessary there's there's obviously data centers that are being built there are there's a question of data there's a question of regulation so that's why you know to I think one of our themes throughout this whole conversation has been well what what is my role what is what is our role collectively in shaping the future of AI if we're not the head of a frontier AI company if they're not a lawmaker if we are a decision maker in a company but again not somebody who can create a new model but just decide whether to adopt it or not I think there's going to be so many questions in the next few years in terms of whether we're running a factory how much data gets used how is that data get used if we're even just a patient in health care whether we give up our data and many of us are already giving up our data but having more understanding about how that data is being used that will contribute to whether AI has a shaping force on our world in a different least different domains you know it was interesting Greg what you said earlier about you know journalists and about the program you know suffering by not by scaring people too much and and to me it's almost like a journalistic responsibility to be scaring people right like I you you in some ways and I feel the same way here that there's an obligation to actually tell people the facts and what's going on right and and one of the you know downfalls or issues with journalism these days it is this sort of social media effication or this algorithmic you know content lens where you just tell people what they want to hear and people say yes you know validate me tell me what I think is right versus actually provide me with the facts and tell me something that that's important that I'm educated on you know versus just you know that the same old thing that I already know yeah I mean it's it's such a good point I mean it reminds me of my days as an international correspondent and sort of being say in Afghanistan and hearing the gripes of other correspondents they say you know gosh this is a whole war and people back at home are just not interested anymore it's fallen off the news and I always saw it differently I always thought no no my job is to make you care and to I don't know if you call that entertain or engage you but I was going to find some kind of angle that was going to make this feel relevant to you so I do think that as voices as as I feel like we do have a role we do have a job to make this feel relevant that probably means don't freak people out right away because because then they'll just feel small as opposed to empowered but I think that you can take that too far and we are in a situation now where there's this incredibly important technology it's complicated the complexity is interesting and is is maybe worth spending some time thinking about but this is why it feels like you and I mean and we need to find these new new new narratives not just the sci-fi narrative not just the you know apocalyptic doom and gloom I think Ellie is your kowski is I don't know if a titling the book if anyone builds it everyone dies it's blunt and maybe it gets more readers but there's there must be a role for sitting down with a listener or reader and saying okay here are some incredibly fascinating things about AI they didn't know it here's some ways they could go wrong and here's ways in which the the world might play out that might feel radically different than than you may think it feels here's how your kids will fare you know it's like we have to address the questions that people have and not just leave them with um you know leave them with a scare story and so I struggle with it you hear me struggling within this answer because I I do think our our role as information gatherers is to package package that information in a way that people will will want to consume I mean it's we're not like professors here with a with a captured audience people can choose what they want to tune into so we have to we have to sing and dance for our sufferers aware but at the same time it's the complexity of it and the the fear I think that audience is have of that complexity and I think even the fear that journalists have of a complexity I would say that you're very much an exception to this but you know a fear of say oh gosh I don't want to sound stupid because I'm talking about this stuff that's computer science related and I didn't really feel I mean you know uh yes there's a lot that makes us feel dumb when we're trying to understand how a something like artificial intelligence works but just being willing to ask those questions and being willing to dive into what is red teaming look like what is safety training look like what what what might a model take to be controlled if we can get more people to have these conversations without feeling imperiled you know either physically or or economically imperiled that will be a win right now I love that it it makes it makes complete sense and it's it's a it's a noble calling as well well thanks so much for this encouragement I really appreciate Jeff I appreciate that I know absolutely keep keep doing what you're doing and you know we'll get we'll get out there uh one person at a time absolutely absolutely if you work in IT infotech research group is a name you need to know no matter what your needs are infotech has you covered AI strategy covered disaster recovery covered vendor negotiation covered infotech supports you with the best practice research and a team of analyst standing by ready to help you tackle your toughest challenges check it out at the link below and don't forget to like and subscribe