Digital Disruption with Geoff Nielson

How AI Will Save Humanity: Creator of The Last Invention Explains

81 min
Nov 24, 2025
Summary

Andy Mills, producer of The Last Invention podcast, discusses three major camps in AI debate: doomers who fear existential risk, accelerationists who see AGI as salvation, and scouts who believe preparation is critical. The conversation explores how leading AI figures shifted from warning about existential risks to racing to build AGI, and examines the implications for jobs, society, and humanity's future.

Insights
  • AI leaders (Altman, Amodei, Hassabis, Musk) have genuinely shifted from doomer positions to acceleration, believing they must build AGI first to ensure safety—not cynical marketing but sincere belief in 'acceleration as salvation'
  • The debate over AI has not yet become politically polarized, creating a rare window for genuine public discourse before it becomes a left-right issue
  • LLMs are to AI what websites were to the internet—the chatbot interface masks the real transformative technology underneath that companies are investing billions to develop
  • Journalism's crisis of trust predates AI; the industry's vulnerability to disruption stems from social media's damage, not AI itself, creating opportunity for trustworthy alternatives
  • The timeline for transformative change has compressed dramatically—from centuries (cavemen to modern era) to decades (Wright brothers to WWII aircraft), raising urgency around AI governance
Trends
  • AI safety concerns moving from academic fringe to mainstream: bestselling books, bipartisan political interest, and media coverage legitimizing existential risk discussions
  • Consolidation of AI leadership around a small group of competing founders (Altman, Amodei, Hassabis, Musk) racing for AGI, with internal group chats dissolving by 2025
  • Shift from regulation-seeking (2015-2022) to regulation-avoiding (2023-2025) messaging as AI companies gain political leverage and investment momentum
  • Integration of AI into foundational business operations accelerating faster than previous technologies, making rollback increasingly difficult
  • Emergence of 'scouts' as pragmatic middle position: accepting AGI is coming but demanding immediate preparation on economics, politics, and safety
  • AI replacing human conversation and phone calls as primary intellectual engagement, with potential long-term isolation and social implications
  • Doomers improving argument quality and accessibility (moving beyond 'paperclip maximizer' thought experiments) while accelerationists lack comparable counter-narrative investment
  • Bipartisan concern about AI existential risk (Steve Bannon, Megan McArdle) indicating the issue hasn't yet become partisan despite polarization of other tech issues
Topics
  • AI Existential Risk and Safety
  • Artificial General Intelligence (AGI) Development Timeline
  • AI Regulation and Governance
  • Economic Disruption from AI Automation
  • AI Leadership and Competitive Dynamics
  • Doomer vs Accelerationist Debate
  • AI Hallucinations and Reliability
  • Media Trust and Journalism's Future
  • Social Media vs AI as Information Source
  • AI Consciousness and Merger with Humanity
  • Job Market Disruption and Economic Policy
  • AI Safety Research and Investment
  • Large Language Models (LLMs) Capabilities
  • Political Polarization and AI Discourse
  • Historical Technology Adoption Timelines
Companies
OpenAI
Led by Sam Altman; created ChatGPT; shifted from safety-focused to acceleration-focused; subject of internal departur...
Anthropic
Founded by Dario Amodei after leaving OpenAI; competing AI company with emphasis on safety; represents fragmentation ...
DeepMind
Founded by Demis Hassabis and Shane Legg; acquired by Google; pursuing AGI development; represents academic AI resear...
Google
Acquired DeepMind; major AI investor competing with OpenAI and other players in AGI race
New York Times
Andy Mills' former employer; discussed as example of journalism losing trust through partisan bias and social media i...
The Daily
New York Times podcast co-founded by Andy Mills; example of narrative journalism format
X AI
Elon Musk's AI company; competing in AGI race; subject of lawsuit against OpenAI leadership
Singularity Institute
Early AI safety organization; received investment from Peter Thiel; associated with Eliezer Yudkowsky
Longview
Media company founded by Andy Mills and Matt Stauffer; focused on long-form context and historical perspective on cur...
InfoTech Research Group
Sponsor providing AI strategy, disaster recovery, and vendor negotiation research and analyst support
People
Andy Mills
Host and primary speaker; producer of The Last Invention podcast; covers AI debate and existential risk with journali...
Geoff Nielson
Host of Digital Disruption podcast; interviewer engaging Mills on AI camps and implications
Sam Altman
Shifted from AI safety advocate (2015) to leading AGI race; testified to Congress requesting regulation; now focused ...
Dario Amodei
Left OpenAI to found Anthropic; influential in ChatGPT development; represents safety-focused faction in AI competition
Demis Hassabis
Co-founded DeepMind in 2010 with vision to create AGI; pitched artificial general intelligence to investors, not prod...
Elon Musk
Early AI safety advocate; left OpenAI; now competing with Altman; only major figure openly attacking competitors in A...
Ilya Sutskever
Left OpenAI; part of original safety-focused coalition; represents internal departures from leading AI companies
Eliezer Yudkowsky
Leading AI doomer voice; warned about existential risk since 2013; co-author of 'If Anyone Builds It, Everyone Dies'
Geoffrey Hinton
Lifelong AI believer since 1972; now publicly warns of existential risk; exemplifies shift from acceleration to cauti...
Peter Thiel
Early investor in DeepMind, OpenAI, and Singularity Institute; advocates for acceleration and against 'safetyism' min...
Alan Turing
1940s pioneer who envisioned thinking machines and predicted AI takeover; foundational figure in AGI concept history
Max Tegmark
Organized 2015 Puerto Rico conference bringing together AI leaders and safety researchers to align on AGI concerns
Nick Bostrom
Early AI safety advocate; attended Tegmark's 2015 conference; evolved from doomer toward scout position
Ed Zitron
Skeptic of AI hype; argues LLMs cannot deliver on promises; represents contrarian voice in AI debate
Alex Williams
Socialist accelerationist arguing AI liberation from toilsome jobs is worth pursuing despite risks
Nate Soares
Co-author of 'If Anyone Builds It, Everyone Dies'; represents doomer camp gaining mainstream attention
Stanley Kubrick
Created 2001: A Space Odyssey (1968) in collaboration with AI scientists; influenced sci-fi framing of AI risks
Marvin Minsky
Early MIT AI lab founder; part of 1960s AI optimism that led to AI winter when promises weren't met
John McCarthy
Early AI researcher; founded MIT AI lab; part of 1960s optimism about achieving AI in decades
Claude Shannon
Early AI researcher; benefited from space race funding that supported AI research in 1960s
Quotes
"They think that that has is cause for so much alarm that we need to stop. We've been trying to get us to stop right now before we go any further."
Andy MillsEarly in episode discussing AI doomers
"A more intelligent species is rarely controlled by the desires and the wishes of a less intelligent species."
Andy Mills, discussing the existential risk argument
"Acceleration is salvation, that acceleration is a duty that they have on behalf of the human race."
Andy Mills, explaining AI leaders' genuine belief in acceleration
"The chatbot is to the AI, what the website is to the internet."
Andy Mills, explaining LLM significance
"We are all riding on that conversion, you know, and we've yet to engage in a big, robust public debate about whether or not this is the right path."
Andy Mills, on the economy's dependence on AI acceleration
"If we literally came up with a technology that did that work for them, it would possibly be like the most powerful liberating force for like the betterment of humanity in human history."
Andy Mills (quoting Alex Williams), on AI eliminating toilsome jobs
Full Transcript
Hey everyone, I'm super excited to be sitting down with Andy Mills, co-founder of the New York Times The Daily Podcast and producer of tech podcasts like Rabbit Hole and The Last Invention. Andy is a world-class storyteller and has been dedicating his time to talking to the greatest minds in AI so that he can help us better understand what the technology is capable of and where it's going next. I want to ask him which vision of the AI-driven future he finds most convincing, who he thinks will be the winners and losers, and what we need to do to be ready. Let's find out. I am here with Andy Mills. Andy, you're the producer on The Last Invention podcast. And one of the things that you know as well as I do when you're making an AI podcast is you hear from all sorts of camps, if I can call them that, of people who have completely different views for the outlook of how AI is going to transform or not transform our society. And so I wanted to ask you, maybe you can lay out what are the main camps that you've seen and, based on hearing arguments across all of those, where do you see yourself sitting? Yeah. Well, thanks for having me. Thanks for the question. My favorite subject to cover as a journalist is a debate. There is something very attractive to me about trying to understand in good faith why intelligent people come to such different conclusions when looking at the same material. And I had known that there was a contingent inside of the world of artificial intelligence that was really, really worried about it for many years. Like, Eliezer Yudkowsky's podcast interviews in 2013 or something is when I first realized that there was this almost like biblical prophet voice out there saying that the sci-fi movies are kind of true. And we really need to get ready, we need to get prepared for this.
And after ChatGPT blew up, I started to increasingly run into essentially the opposite side of that debate, which are these people who we often call the accelerationists who believe that AGI, this artificial general intelligence point that they believe is coming, it could be the best thing that ever happened to us. And so I was attracted right away to the people who have those strongly opposing views inside the same world. But the more that I dug into it, the more I realized that everyone in the AI world had different camps. So literally, there's like eight or nine different ways you could categorize the debate that's happening inside the technology world about what we should do with artificial intelligence. And we ended up in the podcast narrowing them down to three basic camps, the camps that I think are most influential in this conversation in the moment that we're having. Camp number one is the, you know, AI doomers, essentially the people who think that the risks of the AI race, as we are conducting that race today, are so great, and include the fact that we may create something smarter than us that ends up leading to our own extinction. They think that that is cause for so much alarm that we need to stop. They've been trying to get us to stop right now before we go any further. Then on the far end, you have the AI accelerationists who say that the fears have been overblown, that the benefits of this and the way that this might help us out of the stagnation that we're in. I mean, some of them will even tell you this malaise, this essentially nihilistic streak that's spreading from our politics to our social media, like almost all of that is going to be positively affected by the discovery and the investment in a true AGI. And then there's this camp that's kind of in the middle, but they're not like a medium ground between the two, they're their own place on the map. And I call them the scouts.
And they're the people who think it's probably too late to stop. So the doomers are right to be afraid, but we're not going to stop. And maybe we shouldn't stop, because the accelerationists are right that this could be like fire, like electricity, like a true turning point in human history. But they believe that the risks are real. And so we need to do everything we can to get ready for what's coming. And that's on the economy. What will we do if the job market starts to fall away? If the job market goes away completely, what should we do with our politics? What kind of tests, what kind of regulations should we put in place? And they are trying to shout as loud as they can that we can't wait five years, like we have to start getting ready right now. Journalists, universities, think tanks, like we need to turn our efforts into solving the problems that stand between now and the creation of this AGI. So those are the three main camps that we talked to. Obviously, there are camps like the skeptics that are out there as well. And we are going to follow up with them down the road. But I just think that we're living in a time where the skeptics are not really a forceful part of the conversation happening closest to the technology. I want to come back to the skeptics in a minute because I've got lots to say about the skeptics. But before I do, across the, it sounds a little bit like a spectrum where you've got the scouts, maybe in the middle or more, maybe taking aspects of both camps and having more of a practical lens on it. But listening to all these disparate voices, is that where you see yourself, Andy? Are you more sort of in the scout camp? Or if I had to ask you to put your flag down, where do you think it would go? Well, I am the kind of journalist who wants to remove myself to understand it all better. I wouldn't be investing so much time and effort if I didn't think all three camps have earned our attention and our time.
I think they all have an incredible case to make. And I want to help people find their own place in this debate and join this debate, because I think the time has come that we join this debate. And to do that, you really have to suspend your own biases to help each camp put their best foot forward. But I will say I'm a person, and most of my personal circle were doomers. Some of them have moved into scouts. There was definitely an anti-accelerationist bias in my personal circle when I started. And that's broken down. I find that the accelerationists do have a really compelling case. And so I think I was a little bit more doomer-scout at, let's say, maybe April, May, when I was really diving in deeper into this. But by now, I truly can see a world where all three camps get what they want. I can understand the vision of the future that all three camps are painting and find all of them compelling enough that, and of course, all of them know far more about this technology than I do, that I just have decided that for now I'm keeping a complete open mind as new information is going to be coming in over the next several years. Right. That's fair. And it's interesting to hear about your sort of personal shift, maybe coinciding with a societal shift, maybe not. But I wanted to touch on this notion of the anti-accelerationist. And one of the things you said that was interesting about the scout is that you said it's how can we take action? How can we prepare ourselves against things like job loss? And the reason that caught my attention, and tell me if the people you're speaking to see it differently, but what's so interesting about the notion of job loss is you may hear that as kind of a casual listener and say, oh, well, that obviously sounds like doomerism. But sometimes, like, I'm hearing accelerationists talk about job loss as well, as it's job loss, but it'll be good job loss. It'll be creative disruption or something like that.
Where does the economic component of this fall into place? And how much do you find that we have to get beyond what's actually technologically possible and start to look at the broader societal factors, things like economics, politics, society? To tell you the truth, the piece of this that I'm most invested in and most interested in at this point is the existential risk piece. The fact that like highly educated, very experienced, usually very sober-minded scientists are sounding like apocalyptic prophets is interesting. And trying to dig deep into what has convinced them, especially because many of them were busy accelerating this technology. I mean, in the case of a guy like Geoffrey Hinton, who right since 1972, through a dark, cold AI winter, believed that he should dedicate his life to this. And now he's telling us that it poses an existential risk to the entire existence of our species. That's interesting. What's the background? What's the story there? How many people are like him? It turns out a lot more than you think. So like the existential piece is the thing I'm most interested in. Obviously, it'll have the biggest impact if those predictions were to come true. But the economic piece, a lot of them will tell you that they talk about that in part because we can imagine it. Like they think that the existential risk piece with AI is so hard to imagine, they themselves don't really know how to paint the picture of what it would look like. With the fears of atomic bombs leading to an existential crisis, at least we had a vision of the mushroom clouds, of what it might look like. They can't even do that. But they know that they can tap into the reality of an economic crisis. I mean, many people today lived through the 2008 financial crisis. And in some ways we are still in the aftermath of what that did. And they're saying the disruption to the economy might not just be more severe than that, but it would last so much longer and possibly would last forever.
Like with the industrial revolution coming in, there were certain jobs created that never existed before, and certain jobs ended. And then the same thing when we came into the technological revolution. And they think that they can actually get people's attention with the short-term risks to the economy. And then talk to them about, hey, do you know what the really big risk is? That this thing might become more intelligent than us. And a more intelligent species is rarely controlled by the desires and the wishes of a less intelligent species. I think that they know that that sounds a little bit bad. And so they focus a lot on the economic piece. Because the economic piece not only I think is a good toehold, but it is obviously a reality. And I think that it's coming a lot faster than we think. I know that you recently talked to Ed Zitron. Is that his name? Is that how you say his name? Yeah, Ed Zitron. That's right. Yeah. He and I, we are looking at this moment in artificial intelligence and having such a different response to it. It's fascinating. I believe he said that LLMs can't do anything, that these current chatbot models really aren't able to live up to the hype. Well, the hype is pretty big. So I give him that there's a lot of hype. But this idea that they can't really do anything, that there's nothing to see here. I cannot find any evidence that that's true. The businesses are already interweaving this into like the foundational aspects of their business. And this is just the chatbot, where we're at. And I think I want to remind people the chatbot is to the AI what the website is to the internet. Yes, though, like the website might not be that impressive to you. But they're not investing all this money in a better website. They're trying to create something like the internet. It's that artificial intelligence behind the chatbot that is the thing that's so exciting.
So when you look today and you see that, you know, Copilot and ChatGPT are still not able to do things that maybe you were led to believe by some of the hype from the product managers that they were going to be able to do by now. And so you think nothing to see here. Interesting point of view. I want to hear Ed out. I'm glad he is a part of this public conversation. But I just want to put alongside it, like, all of these people who are very close to this technology, who are worried about this technology, and they're saying, even in its infancy, we are weaving it into our economy, we're weaving it into our businesses. And so if the ones who are worried about it are right, it does pose all these risks. It's going to become increasingly hard for us to just unplug it. And as it becomes more and more enmeshed in our economy, back to your question about the jobs, it's hard to even imagine the ways we might come to rely on it in the future, in a way that a jobs program implemented in 2029 is just not going to be prepared to quickly respond to. If you work in IT, InfoTech Research Group is a name you need to know. No matter what your needs are, InfoTech has you covered. AI strategy, covered. Disaster recovery, covered. Vendor negotiation, covered. InfoTech supports you with the best practice research and a team of analysts standing by ready to help you tackle your toughest challenges. Check it out at the link below and don't forget to like and subscribe. Let's follow the thread again of, you know, what you said a little bit earlier, which is that the doomsday scenario is in some ways the most interesting here. And I think your position, Andy, is pretty similar to mine, which is, and Geoffrey Hinton is a perfect example, but I came into these conversations in many cases ready to dismiss these people as kooks and just say, like, oh, you're a doomsday cult. You're way out there. But the thing that gave me the most pause.
The apocalypse crowd, they got a bad track record, right? There have been many people claiming an apocalypse was coming. Yeah. And they've been wrong. And I'll just say personally, I was raised very religious, and it was like a big part of my upbringing, believing that God's apocalypse would likely happen in my lifetime. And when I left that kind of fundamentalist faith, I do think I developed an allergy to anything apocalyptic. So I think we're coming from the same position there, which is like, you know, that could be a bad thing. Because if you're looking to be very open-minded, you want to make sure you're not too allergic to anything if you want to try and really understand the world. Well, and that's kind of where I was going, is it sounds farfetched. It's completely, as you said, different from anything that we've heard before. And yet when you talk to these people, you know, long enough, as you do on your program, and as listeners to this program have heard before, one of the first things you come away with is like, crap, they actually do sound like they know what they're talking about. And their arguments are resistant to most, you know, kind of logical debates you can throw at them, right? Like it's not like this is just a house of cards of scare arguments, where you say one reasonable thing and it collapses. They've thought this through, they've lived and breathed this, and they have a response to everything. And the piece, I guess, that gave me the most pause is the realization that when you talk to people who try to dismiss the doomers, the best argument it sounds like we have is, you know, well, things have worked out pretty well for us before, and we'll probably figure it out, right? I don't know, maybe you heard something more credible than that, but it's tough to just move in my view. I have two responses to that.
I think one is, I think the doomers have become increasingly better at making their case, at crafting their arguments. I mean, if you've been following this for a decade, like I have, you'll remember the era of the paperclip maximizer, when that was their go-to way of explaining it. They've, I think, wisely moved on from that somewhat brilliant, but also somewhat confusing, thought experiment into, I think, points that are a little easier for people to grasp, helping to make the same case just with an updated set of arguments and allegories for people to digest. I don't think that the accelerationists have invested similarly in trying to hone a great case. And in some ways that makes sense, because they're winning, right? They're like, we are all accelerating right now. There are no meaningful federal regulations in place to slow them down. There are so many billions of dollars being put in this industry that if they stopped tomorrow, it would probably cause a global financial collapse like we've not seen in our lifetimes. And they're racing one another, like Altman and, you know, Hassabis and Musk and, you know, Dario Amodei, these brilliant people are competing not just against the clock and not just for, you know, better products. They're competing against each other to try and be the first to get to this moment of, you know, AGI. And I don't think it's top of mind to them to come onto a podcast and try and really make the case for why the doomers are wrong and they're right. And I think that if we start to see the doomers gain more ground, like, you know, Eliezer Yudkowsky and Nate Soares, their book, If Anyone Builds It, Everyone Dies, very catchy name. It made its way onto the New York Times bestsellers list. Like, it's starting to find an audience. You see more lawmakers who are getting vocal about the risks. And that's happening both on the left and the right.
You're having interesting media figures as varied as, like, Steve Bannon, who's really concerned about the existential risk. And then you have Megan Markle. Hang on, what's the princess's name? I always forget. Markle. I said Megan Markle; I meant Megan McArdle, the columnist for the Washington Post. I really gave her a promotion there to princess. But across this wide range of people's politics, like, this issue has not yet become politically polarized. It's not that the accelerationists are right-wingers and, you know, the safetyists and doomers are on the left. This issue is like, it's still in its infancy in the public debate. And I think that if we start to see the doomers get more and more purchase, I think at that point, we're going to start hearing a better case being made for the accelerationists. So that's, like, thing one. Thing two, I think that the best case that the accelerationists are making, personally, it does come down to this idea that we have become increasingly safety-oriented as a society. This is something you hear a lot from Peter Thiel. And like, no matter what people think of Peter Thiel, he clearly has had a massive influence on the world. He was an early investor in all these companies, in DeepMind, in OpenAI. He was, even if you're an AI doomer, he was an early investor in the Singularity Institute and in Eliezer Yudkowsky, who is like the king of the doomers. Like, Peter Thiel is all over this conversation. And he's been making, I think, a very persuasive case to people that this safetyist mindset that has overcome our culture, ranging from how we parent to how we invest in new technologies as a government, that this safetyist mindset has created a stagnation that is threatening our politics, that is threatening our sense of purpose, and that we feel in all these like abstract ways. Like, when you go to New York City, and then you go to, like, a Japanese city, and you just think, how do we have more money in New York?
And as much as I love New York, lived there for many years, like, it's incredible how little progress has been made in decades when it comes to almost anything a New Yorker would want to invest in. And then you would just wonder, like, why are we so stagnant? Where are the flying cars? That's, like, a shorthand for this. And I do think that if we bring that safetyist mindset too much into the AI world, I think they do have a point that we're in some ways allowing ourselves to be ruled by our fears, instead of by our desires to truly reach for more, to believe in something, to believe that the world can be different and better than it is today. I don't think that they make the pitch quite with the tone of inspiration that I'm hitting it with here, but I feel it sometimes. I can picture a world that they're imagining where, you know, like one of the accelerationists I spoke to, he happens to be a socialist accelerationist, because like I said, there's a large spectrum of different political beliefs inside of each one of these camps. And one of the reasons that he was so passionate about us getting to AGI and us really investing in this technology is he was saying, like, think about the millions and millions of people since the Industrial Revolution who have had to do shit jobs, that someone has to do, that our society has set up in a way where someone has to do this. And in fact, a lot of someones have to dig in these mines and they have to clean these toilets and they have to do these repetitive factory jobs. That if we literally came up with a technology that did that work for them, it would possibly be like the most powerful liberating force for like the betterment of humanity in human history. And he's like, that is worth it, not because of some abstract thing that you want one day, but because of the people who are living in these toilsome, miserable conditions right now.
And to that, oh, but we don't know what they would do on the other side, and we don't know how we would organize our society, this accelerationist, his name is Alex Williams, what he was saying is that, like, those problems we should solve down the road, after dealing with the actual problem that we have right now. And I find that to be quite persuasive as well. But I don't know, it's early in the debate. And I think that over the next couple of years, especially if AI continues to get this massive amount of investment, if they continue to see the incremental or maybe even the exponential growth that they're hoping for, I think that this is going to be the debate increasingly happening, not just in our political spheres, but like around people's dinner tables, friends out for a drink. And like the reason that we put this podcast together and the reason that we're putting it out now is that we feel like, it's kind of, it's time to get your 101. It's time to get an introduction to where we're at right now, how we got here, and like what the three major camps believe we should do next. So let's, in the spirit of the 101, just kind of unpack some of this. And by the way, for what it's worth, I'm a listener to your podcast and I think it's done a really nice kind of, you know, overview of that piece. Where I wanted to go though is, we talked about accelerationists winning, winning technologically, but also winning financially, I guess you could say, right now, because they're getting more investment in these technologies.
But one of the pieces that's interesting to me is, I guess, the personalities behind some of these different technologies and the fact that you've got, you know, a series of different tech stacks here of different, you know, AI products with people who, I mean, you could draw a pretty interesting diagram of how these people have intersected and, you know, the drama in their lives is like borderline Game of Thrones-y. But I mean, for the uninformed, how would you classify the main AI products here and kind of the people behind them, if that's not too broad a question? Well, let me try and give an answer and you tell me if it's what you're going for. I mean, the number one thing that we start off the show with, and that I start off most of these conversations trying to distinguish, is just like, what is it that OpenAI or Anthropic or DeepMind, what is it that they think they're making when they ask investors to invest in AI? And it's not a product, it's not a chatbot. When Demis Hassabis and Shane Legg, fresh out of getting their PhDs in neuroscience, come to Silicon Valley in 2010 looking for investors, a part of their pitch was, we don't want to make a tech product, we want to make artificial general intelligence, the complete automation of anything that the human mind can imagine and then beyond. Like, they want to make a new species is the way that a lot of the sources that I've talked to say it. It's like, it's a better shorthand to say they're trying to create an intelligent new species than it is to say they're trying to make something like a chatbot. So that's an important thing to distinguish. And that dream has been alive under different names. Right now, we call it AGI, but thinking machine or artificial intelligence or automata, these had different names. It goes all the way back to the 1940s.
And it goes back to Alan Turing, one of the, like, godfathers of computer science, as he's often called. He's sitting there in the '40s, looking at one of those massive computers with the tubes sticking out of it, that was the size of a room, I think it weighed like two and a half tons. Already, seeing what this computer was able to do, he was envisioning a day when it could think as well as a human, and he thought that when that happened, it would probably take over. Right. And it attracts these really interesting figures who have this belief that the computers, even the computers we have today, they can be so dumb sometimes, you know, so frustratingly dumb, and to think that for all these years there have been a group of true believers who think that we can achieve that level of automation. And, you know, in the podcast, we go through all the ups and downs, like in the 1960s, they really thought that they were close, and they sounded a lot like people sound today, thinking that we were 10 years away from a true AGI thinking machine, right. And obviously, it didn't happen. But in the modern telling of where we're at and who these characters are, I think the important thing to know is that the most influential voices inside of the current AI conversation, the most influential tech leaders, almost all of them were the underdogs 10 years ago. They were not the people who were at the forefront of Silicon Valley when it came to new technology. In fact, the people who are leading the charge, leading the race in the US right now, they were the ones who were the most freaked out 10 years ago. Dario Amodei, Sam Altman, Elon Musk, you know, Demis Hassabis, the list could go on. All these top players, you could find that they were investing, you know, millions of dollars into AI safety, that they were lobbying Congress. Elon Musk had a personal meeting with President Obama in 2015.
Go back and look at Sam Altman's blog in 2015. They're saying over and over again: this thing poses the greatest existential risk to humanity. It may lead to our extinction. It may come to see us the way that we see dogs; some people say the way that we see ants. And those people are now at the forefront of the race, right. They signed a petition saying we should do everything we can to make sure there's no AI race. And now they're at the head of the race. And I think that the cynics and the critics of these people are assuming that what happened is greed, that what happened is that doomerism was bad for business, and so they just pushed the doomerism to the side. And I think one of the most fascinating aspects of the story is that it's way more interesting than that. What happened is that they came, one after another, to believe that someone somewhere was going to make this technology, that AGI would be created, and that the best thing that they could do for the future of humanity is make sure that they were the ones who made it first. That because they care about safety, because they care about democracy, because they care about, you know, things like privacy rights, if they made the AGI, they could use it for good. Before China. Before, you know, if you're Sam Altman and Elon Musk, before Google, right? If you're Demis Hassabis, you know, before Musk, right? They came to believe that the acceleration is not just a convenient sales pitch. They sincerely seem to believe that acceleration is salvation, that acceleration is a duty that they have on behalf of the human race. And that's a good story. That's a fascinating story. Whether or not you believe it, I'm not sure I believe it, but it's important to understand that that's happening.
And then to understand that our entire economy, not our entire, but a huge part of our economy and a large part of our stock market, we're all riding on that conversion, you know, and we've yet to engage in a big, robust public debate about whether or not this is the right path. And we're already so far down it. I think, if you ask who the characters are, that's the spectrum. We've got Alan Turing all the way up to these guys. And then in the middle of it, I guess, the other characters that I really love are the contrarians, which I think this subject would naturally attract. But one of my favorite things I learned putting the series together was that there were two camps in the earliest days of artificial intelligence about how you would go about making a true thinking machine. The dominant camp, they were, you know, the symbolists. You don't have to get into all the technical details of what they were, but they're the guys that made that chess player that beat Kasparov; they're the ones who made that Jeopardy player that won Jeopardy. They were the ones who were seen as the future of AI for so long. And then the underdogs, the contrarians, were these connectionists. And these are people who for decades believed in their vision for how to make an AI. And I did not realize just how disrespected they were inside of their own field, just how far they were pushed to the sidelines. And then how quickly their theories became the engine behind this AI revolution that we're going through. And then immediately, these guys who dedicated their lives to this contrarian view, it's like moments after they're winning their Nobel Prize, you know, moments after they're winning their Turing Awards, they then quit their jobs and come talk to people like me to say: we've got to stop this thing, it might kill us all.
So that's the spectrum of the personalities and the characters that are, like, fueling the thing behind your, you know, ChatGPT, fueling the, you know, engine of progress or engine of profit, however you want to say it, wherever you stand on the positions. Those are the guys behind this AI idea that's become so central to our society.

That's great, Andy. And that's exactly where I wanted to take the conversation. And I, you know, had a very similar reaction to you when I heard about all these different personalities, all competing to, you know, be the first to make this artificial general intelligence. And the way I characterized it, the first thought that came to my mind, and I don't know if you're a Mad Men guy, but I love Mad Men. Maybe the most famous scene in the show, it's in the first episode, but basically, they're coming up with this ad for cigarettes, right. And the gist of it is that everybody else's cigarettes will kill you; ours are toasted. Right. Like, oh yeah, it's toasted. It's toasted. Lucky Strike. And to me, that seems to be the pitch of every single AI leader right now: everybody else's AI will kill you; our AI is toasted. Right. Like, do you buy that? And within that argument, does it matter who wins, is the question I'm getting at.

Well, maybe I'm too gullible. I mean, I covered politics largely before this. I've been deep in the world of, like, what is causing this deep divide in our politics today? What role does technology play? What role does, you know, the lack of having other strong markers of meaning in our lives play, and all this. And I'm of the persuasion to take people at their word when their actions seem to back up their word.
And whether that's, you know, why a lifelong Democrat in Pennsylvania decides to vote Republican for Donald Trump, or whether that's a technologist who seems to sincerely believe that the thing he's making could end the human race, and that if he doesn't make it before someone else does, it's more likely to do so. I personally don't think it's toasted. They seem to sincerely believe it, to the point where, if you're going to criticize it, I think it's better to criticize it the way you would criticize, maybe, a believer in a religion. You can criticize someone for their religious beliefs, but if you start off with the assumption that they cynically hold them, that behind closed doors they're not really holding them, I don't think that puts you on the right footing. Now, there are a lot of skeptics out there, and even some accelerationists will say this about the AI doomers: that it is marketing. But it's almost the opposite of what you're saying with toasted. The criticism they often get is: if you go out there and say, the thing I'm making, the technology I'm making, is so awesome it might destroy the earth, that's a way of saying what I do is important. And you know what, it might get rid of all jobs, and that's a way of saying invest money in me, don't put that money in UPS or something, that's a failing business, we're gonna have robots doing all that. And that point is out there. But when I talk to, like, former OpenAI employees, they're telling me that this is the conversation that they're having in the cafeteria over lunch. This is what they talk about at Friday night happy hours when they go out. The people there are openly saying: holy shit, I hope we don't make something that destroys the world. So I don't think that it's quite the same thing as what you're saying. I do think that it's going to change.
And, you know, I think that it's dynamic. When Sam Altman in 2023 went to testify before Congress, right in the aftermath of the massive explosion of chatbots. So ChatGPT comes out in November of 2022. It's followed by all these copycats, and everybody wants to get into the chatbot game, and a lot of weird hallucinations and crazy things are going on. Congress calls Sam Altman and others to come in and testify about what's going on with AI. And it's incredible. It's maybe the most incredible congressional hearing I've ever seen, because he's up there saying: regulate us. Our greatest fear is we might break the world. We want you to regulate us. And yet they never did. It's a weird thing to ask for your industry, for your company, for you personally to be regulated. And then everyone else was like, yeah, we should. And they never did. You know, fast forward to May of 2025. He's brought in to testify before Congress again, and almost the entire hearing is about: how do we help you beat China? How do we ensure that the U.S. remains at the forefront of this? That's an interesting change that's happened. And I could see it continuing to change again, and change back and forth and continue to change directions. And I think as it changes, you're going to hear changes coming from these companies. And we could get to it's toasted, you know: xAI, what Elon Musk is doing, is going to kill the world, but we, what we're doing, is going to ensure civilization. But I think at this moment, they're not really there. And in fact, I'm kind of surprised that they're not a little bit more openly competitive, like Pepsi and Coke, or, maybe for our generation, it's like T-Mobile and AT&T. Like, I watch football; they go hard after each other in those ads, literally making fun of each other's ads in the ads, because they're competing for customers on the same base. We're not yet seeing Claude go that hard at OpenAI.
Even, you know, this is like really insidery, but maybe you follow this: even Dario Amodei, he left OpenAI after essentially being one of the most influential people to put them on the path of the ChatGPT success that they've had ever since. He leaves before they see that success, but after he's done a ton of work to get them down that road. He's never really come out and said why he left. He's, you know, around the edges: they weren't as focused on safety as we wanted, or there was leadership, you know. People pick apart his appearances on Lex Fridman; they're like, oh, is he talking about Sam Altman? And Sam Altman, similarly, very polite. The one exception is Elon Musk, who goes pretty hard at Sam Altman, and of course has a lawsuit against Altman, Sutskever, and OpenAI. But even then, in his advertisements and his talking about Grok and the future of AI, he doesn't pose it exactly the same way, as like, we are competing. There's a little bit of a sense I get, and I don't know what your impression is, that they don't want to openly engage in a kind of consumer-driven, brand-driven competition, the way that other companies have in the past. And I don't know why.

That's, I think, a really interesting point, and I hadn't really considered it before. To me, there's, I guess, a more generous way to interpret that and a less generous way to interpret that. And the more generous way is, you know, these are all technologists first, and they do really care about what's best for the human race and mankind and all of that, and they just truly don't believe themselves to be Coke and Pepsi or T-Mobile and AT&T. The less generous way is that, you know, and the American equivalent doesn't work quite as well.
But I think about, like, in the UK, and I'm not from the UK, but you hear these stories about how all the top politicians actually were classmates at the same private school, and so it's very much this kind of old boys' club. And is it actually that these guys are all, you know, friendly with each other, you know, aside from Elon, who has to lob his grenades, and they're just kind of in on the take together? Or, as long as the money is flowing, they don't want to say anything that could potentially prevent the money from flowing?

Well, my only insight into that is... It's also interesting: all of them were together in 2015 at a conference in Puerto Rico. I think that Sam Altman maybe wasn't at that one, but was at the next conference a few months later here in the US. But it was a conference organized by Max Tegmark from MIT, where he had all of the people who, you know, at the time were true believers who had not yet made the discoveries that would make the AI revolution. And he brought them together: Demis Hassabis, the biggest name at that time, the head of DeepMind, which had just been bought by Google. Elon Musk was there. You had, I believe, Ilya Sutskever there, Dario Amodei. In that kind of one or two conferences, starting in Puerto Rico, Max Tegmark got all these guys together to try and get them on the same page with other true believers like Eliezer Yudkowsky, like Nick Bostrom, these people who at the time were seen as doomers. Nick has since become a bit more of a scout, kind of a scout bordering on accelerationist, depending on the day of the week, if you ask me. But he got all of them together to say: look, we're the people who actually believe that AI is going to change the world. We shouldn't fight so much among ourselves. You know, we should work more together. And there was a kind of brief period where that really seemed like it was going to be possible.
And we start to see the fissures first in 2018, I believe it was, when Elon Musk leaves OpenAI; then when Dario Amodei leaves OpenAI to start Anthropic. Now even Sutskever has left OpenAI. And there is, from what I'm hearing, this weird, like: okay, all of us can share some of the safety stuff we're learning, but we no longer really talk. The way that one insider said it to me is: all these guys were in group chats together, and one after another, they've been leaving the chat. You know, like the group chats have closed by 2025. I don't know if that's true. But that's what the people who are willing to speak to me, people close to these people and close to these decisions, are saying.

Interesting. And yeah, you have to wonder. It's just so fascinating that we're talking about, you know, again, to use your words, basically creating a new species here, something that could forever change the course of human history. And it comes down to a handful of dudes who may or may not be in the same chat together anymore, which is just, I don't know, it is a lot to process.

It's a hell of a story. That's why we put the podcast together, and why it originally, you know, it was really supposed to be four episodes; then it's eight; at one point I thought it was gonna be like 15. There's so much to unpack here. And it truly is like a sci-fi movie. Which, not to go on too big of a tangent on this, but one of the reasons it turns out that it's like a sci-fi movie is because of the advancements that were made in artificial intelligence in the 1960s, when the US was investing heavily in the space race. They didn't just throw money into aerospace; they threw a ton of money into computer science, and eventually that would help us have, like, the semiconductor. And, you know, it's legend: the GPS technology, we wouldn't have had it without that money.
One thing I don't think people realize is that a ton of money went into artificial intelligence research and helped to fund the first AI labs at MIT and other universities, you know, under Marvin Minsky and John McCarthy and Claude Shannon, these really interesting early figures in AI. And with the kind of optimism of the time, seeing the rockets take off into orbit, the AI scientists and researchers had this sense that by the time we're walking on the moon, we're probably gonna have these AIs in our lives. And it turns out that they didn't, they couldn't, down the path they were on, achieve the benchmarks. They were guilty of overhyping and underdelivering. And so the field went into a bit of what they call an AI winter. But because it had become this thing, certain science fiction authors and filmmakers became obsessed with the idea of what it would be like in the future. And one of the people that became so obsessed was Stanley Kubrick. And it's in 1968 that he creates 2001: A Space Odyssey, in collaboration with the leading AI scientists of the time, and with the leading people who were concerned about the future of AI at the time. And he tried to infuse the film with the reality of the world that they thought would come about, and the risks that we would have in that world, and the benefits of that world, by talking to these people. And it made such a compelling film that it became this genre of film, this genre of fear that we have. And so truly, the science is what inspired the science fiction. Then the science fiction has been such a stable part of our lives for so many years that it poses this interesting problem for people right now, which is that you've got the doomers and the scouts and the accelerationists who are saying: this is real, join the real debate. And for some people, they're like: that's just sci-fi stuff, man, you know, like, yeah, we're not going to do it.
And then for the accelerationists, they're like: no, no, no, don't be scared, those were just movies. The Terminator is very unlikely. The Matrix doesn't make sense; why would they use us as batteries? And it just reveals how, at the end of the day, what we believe, the stories that we attach meaning to, it's like the ultimate technology. That's really the hardware, or the software, I can never think of which is the best metaphor, that's running our world right now. And you see that with this investment that's happening: people have come to believe that this thing's going to work. Whether that's true or not, I think it's too soon to say. But it's interesting to me as a reporter, especially a reporter who's so interested in debate: what people believe, how they got to these beliefs, and then what's going to happen next as this becomes a bigger part of our, you know, national and global conversation.

So, you know, when you look at your role as a reporter, I mean, obviously so much of where you're focused is kind of on the cutting edge and what's next and where this is all going, and, you know, the people and the stories and how those change. One of the pieces that's, I don't know, it's probably not right to say this, but just less exciting, if I can call it that, is just where the rubber hits the road in terms of how people are actually using the current AI tools in their daily lives, whether that's individuals, whether that's organizations. Is that something you cover at all? And if it is, you know, what are you finding in terms of what's most promising there, and what does that look like?

Uh, I mean, the way that I'm thinking about it: so the company that I run with my buddy Matt, it's called Longview. Our big thing is that there's a lot of media that's looking at our moment and being like: did you see this just happen?
Oh my God, this new scandal, the Sydney Sweeney thing, oh, Pete Hegseth and Signalgate, and all the Substacks get together, and then, like, two weeks later, oh, there's a whole new thing. And we're dedicated to saying: okay, let's look at everything happening in our moment from this long view. How did we get here? What's interesting context to bring to this that can really be useful in people's lives as they try to navigate what they want to do, what views they're going to form, looking back through history and looking forward at what's happening. So I've been a little bit less invested in some of the, like, hallucinations that these things are having, and more interested in: what about this technology is leading it to have these hallucinations? Oh, it actually turns out it's this decades-old problem, that if you want to use these neural networks as your base models, they are incredibly capable, but you have to make this trade-off where you'll never quite know how they work. That's really interesting to me. So I've been a little bit more big-picture. That being said, I am a person who uses words for a living. It's fascinating that these LLMs have been this massive surprise in the industry to everybody. I don't know if everybody's aware that the money was not being bet on the LLMs until very recently. This was a totally novel approach to getting to this place that people wanted to get to for so long. And I find them to be surprisingly useful, even as they are not yet as useful as... maybe think of it like this, Geoff: in the 1890s, there are all these world's fairs. I'm talking to you from Chicago. In 1893, Chicago hosts the World's Fair. And that would have been the first time that millions and millions of people ever interacted with electricity. You just imagine showing up and seeing thousands of light bulbs, seeing, like, the first prototypes of a radio, right?
Like, it's just hard for us to even imagine what that would have felt like. But I bet they were pretty janky. And one of the reasons I bet it was tough is that no one looks at that light bulb or at that radio and could ever envision Wi-Fi. No one could have envisioned how this thing would become so important, not just to our economies, but to how we govern and how we communicate, such that now, in the world we live in, if we lost electricity right now, it would probably push us into a post-apocalyptic situation. Like, I don't even know what would happen within a month if the lights went off, if electricity went out and it didn't come back for a month. Who would we be in a month? It's a disturbing thought experiment to go down. And that wasn't that long ago. And I think that there's part of me that wavers between: oh, wow, this thing really helped clean up all the typos on this email that I was going to send, love that; and then running into this, like, is this a light bulb at the world's fair? I don't know. Some days I think it could be, it really might be. And other days I think, I'm not so sure. So I kind of waver back and forth between the two. It's just interesting to know that that is happening alongside companies, businesses, millions and millions of people using these things. And that is unlike previous new technologies. You know, when the internet was coming out, it took so long to roll out. We talk about the dot-com bubble; it's been happening a lot lately, this fear about an AI investment bubble, which I think is probably bound to prove true in one way or another, because of how insane the investment is. But the thing that's different about these AI systems now is just how many people are interacting with them, even as they're in their infancy, even as they're so new, even as they're not yet able to deliver on the things that people want.
There's something about the experience and the chatbots that people are showing up for, and they're sticking around. You know, like, when a new technology... Remember when Threads comes out from Instagram? It was massive. It was so huge. People didn't stick around. This thing's now been out long enough. People are showing up, they're sticking around. It's getting integrated into everything from our search engines to our, you know, military security. Those are the times when I feel like, hmm, this is a little bit more light bulb at the world's fair, you know?

I hadn't heard the light bulb at the world's fair, and I really like it. And I want to use all of this as kind of a backdrop for my next question, which may make it more fair or less fair; it's certainly not the easiest question to answer. But I mean, you started that answer by saying that you're in the words business, and that you've found use for this technology in the words business. And I think about the impact that's already being had on media organizations, and that will continue to be had on media organizations, and not just the producer or the supplier side, but just how people interact with content in this world, whether it's how they produce content, whether it's the value that content has for them. It would be weird to be at the world's fair and say: you're a journalist, Andy, how do you see yourself as a journalist using electricity in 1893? But at the same time, I am curious, with the advent of this technology, even, you know, the changes in the last three years and the potential trajectory of where it goes: what does it mean as a media producer for your industry? And how does people's relationship with content impact our institutions around the stories that we tell?

Well, I'm a bit of a contrarian on this, to tell you the truth. I can't imagine a technology that fucks up our industry more than social media did. So I don't tremble in fear.
It's yet to really be reckoned with how damaging social media ended up being, on almost every level, for the institution of journalism. Whether that's the incentives for journalists to play for likes and attention, and to turn into brands instead of being people committed to values, and that's been tough. Or whether it's the fact that the newspapers who invested in reporting were not able to compete with the money being made by the quote-unquote journalists who just give hyperbolic hot takes that do well inside of the algorithms. That didn't work out well. The polarizing politics, where it's hard to know which is the chicken and which is the egg when it comes to media, social media, and politics, but boy, that didn't work out very well. And we find ourselves in a situation where people trust journalists less now than at any point in the history of polling around trust in journalism. That's how far we've fallen. And we keep going another notch in the wrong direction, even as, for the last three years, you know, we have media conferences about this: what are we going to do to earn back their trust, and all this. So I think that whatever happens with AI, we're already doing so poorly that we're not going to be able to blame it for much; we're so far down. On the other hand, I also think that one of the reasons that people don't trust journalism is because of the self-fulfilling prophecy of it, that the algorithms are giving us what we say we will give our attention to. And I don't always know how much of this is just human nature on display. And there's not going to be a technological solution to human nature, right? Our human nature is something we have to grapple with at a deeper, stranger level. But I will say this: when it comes to the AI tools as they exist right now, there are things about them that are obviously freaky. They hallucinate; they make stuff up. If you go on there and just ask it about yourself, it's hilarious.
Say: I'm about to interview myself, I'd love some prep material. It'll just make up jobs you didn't have, you know. That's such a weird move. And even weirder that the people back at the company don't know why it's making that thing up. Those, I feel like, are incremental changes that get better over time. But even in its infancy, its dummy state, even just the chatbot, right, which is different than the AI, if you go and ask it a polarizing question... For example, I did a series with my colleagues, Matt and Megan, about J.K. Rowling, where we got to interview J.K. Rowling and spend time with her, getting to understand why she waded into this big public debate about sex and gender. And then we spent time with her critics about why they're so upset about it. And we tried to help people, you know, see this complex debate more clearly than they were getting a chance to see it on social media. If you Googled, at the time, why are people mad at J.K. Rowling?, Google would give you a recommendation of articles that you could read, based on the amount of attention those articles had gotten in the past. And so those articles are almost all hyperbolic. You could read those articles and not know why people are really mad at J.K. Rowling, not really know what J.K. Rowling believes. But then, as an experiment, I went to ChatGPT, and this was GPT-4, and just said: can you tell me, in good faith, why are people upset with J.K. Rowling? Incredible. Blown away. Not only is it nuanced and really helpful: here, if you want to read her words about this; and here are some powerful things from her critics. It helped you see this. And then, because I know that Sam Altman did this, I think with the pivot to 3.5, the team at OpenAI told it to be nuanced about hot-button issues. It says: remember, when you're forming your own views about an issue like this, to take in multiple perspectives.
And part of me thought: maybe journalism deserves it. You know, maybe this thing could inform the public better than we can. Because on not just hot-button social issues like that, but almost any issue, we have been failing the American people. We're just not doing a great job. And if they decide that the AI is more nuanced, I don't know. It's a weird thought, but if we can't earn people's trust and respect, then I'm not going to be upset that they decided to look for a solution elsewhere. I'm trying to be the journalist, and the kind of journalist, that earns their respect. But I just don't think that our industry has much, you know, tissue to grab. We don't have the sympathy violin that we could be pulling out right now. I don't think most people around the country are looking at us and being like: but you're indispensable and you're doing such amazing work. I don't think we have been.

And if I tease that apart a little bit, it's really interesting to me that there are these platforms, this social media, that rewards content. And I'll use content, and I almost spit out the word when I say it, content in the pejorative sense. And it sounds like you're saying that journalists almost scored an own goal by saying, like: oh, we don't do journalism anymore, we do content. Like, to what degree did journalists actually precipitate this rapid decline in trust by changing what they were creating into a much less trustworthy version of itself? Like, is there still room for journalism, or for, you know, some sort of journalistic organization, to say: actually, we don't do content. Content and journalism are not the same, and we know that. And we're doing journalism, damn it. And that's a, you know, a third pillar against content, the traditionally created content or social content, and whatever ChatGPT is spitting out for you.
Oh, I think we can definitely do it. We've been here before. I don't know how familiar you are with the history of American journalism, but boy, our past is not squeaky clean. We've been journalism-obsessed since our founding, because of the fact that we're a republic. Our founding myth was, everyone gets a voice and everyone should have an opinion, so everyone should read a newspaper. And you can read Europeans' accounts of trips to America, and they're like, my barber had an opinion on politics — everybody here thinks that their voice matters, it's crazy, they're all reading newspapers. But they were very partisan. Most newspapers were owned by a party, and they were understood to be partisan. And this was, I think, fine when there was a sort of shared sense of national purpose. But with the introduction of big capitalism during the Industrial Revolution, you started to see these papers getting bought up into these conglomerations. And this is what people often refer to as the age of yellow journalism. Because it wasn't just that the newspapers were partisan — they were so full of bullshit. They had taken exaggeration so far in the spirit of competition, right? This is the age of the newsboy, you know, the newsie on the street corner — "Mother, hide the children! Read all about it!" And you'd find out, well, I'm not sure that actually was the story, but at least I got your attention, I sold you a paper today. And we got out of the yellow journalism racket. It wasn't easy. And you can look back at who helped us get out of that and get to an era where we were trusted again.
And, you know, it's a good story — I can't do the whole thing now — but I will say this: that's where the New York Times becomes the New York Times as a lot of us knew it growing up. You had this rich guy from Knoxville, Tennessee, named Ochs, who goes and buys the struggling New York Times, which was a Republican partisan paper that was failing in the competition with the other yellow-journalism newspapers of New York. And in what I think was brilliant marketing, as well as a brilliant change, he said, what if, instead of doing what they're doing, our new motto is "All the News That's Fit to Print"? We only publish fact-based journalism, right — that became their thing. And it ended up not just appealing to people who wanted to think, yes, I prefer the facts of the New York Times over that shill from the New York World — it kind of played into people's pride. It was also great for advertisers, because they would rather sell their shaving cream next to a respectable article that had been fact-checked than next to the BS story about the drunk that killed the hooker last night, which didn't even turn out to be true. Changes like that, you know, they weren't obvious, but they did come about. I think changes like that could happen now — I know people, including a lot of wealthy people, who want that to happen. But it comes down to this complicated web, because one of the reasons that journalists did the things that made our industry lose so much trust is that there was a demand for it — people wanted that. And it's really complicated. I mean, if you look at, say, the New York Times — I worked there for a number of years, and I worked on the politics desk — boy, it's understandable why they have this reputation for being biased.
I'll tell you, the calls were coming from inside the house, right? But nobody wanted to publish anything that was wrong. They just didn't want to publish a piece that would get people who they liked mad at them on Twitter. And so they would think, how do I word this in a way so that my fellow graduates of Yale still think I'm cool? And then they would get punished if they published something that was too nuanced, right? I do think that eventually — especially under their new leadership — they're trying to pivot away from that. They want to become seen as a less biased, more trustworthy source no matter what your politics are. I'm in contact with editors and people there, and it's a noticeable attempt they're trying to make. And yet you'll see them pay for it, especially from people on the left, because they've kind of gotten themselves into a situation where they have a large subscriber base that expects them to be on the left. And so if they do anything that makes it look like they're not on the left, those people now feel betrayed, they get upset. It's just hard to know. To bring it all the way back to AI, though: some of the accelerationists I've talked to paint a picture of the future that I find attractive — a future where we somehow get out of the scrolling game, out of the screen game. You know, I've asked them about their concerns about the attention economy, about how much of our attention has gotten sucked up into these algorithms, and how we feel kind of bad about ourselves afterwards. If you ask somebody, what was your favorite reel you saw last month? — they can't even remember one reel they saw. They saw a million of them, and they spent hours doing it. Like, what if that could change?
And he said that his hope is that AI is a healthier replacement for that — that you're having a conversation with an intelligence that's invested in wanting you to achieve your goals. That the AI you're in conversation with is helping you; it's not being funded by you scrolling to the next sexy or terrible or frustrating thing. Its goal is to actually make you feel, deeply, that it has been useful to you. And I don't know if that's going to be possible. I like, though, that that is one of the motivations behind, you know, especially the product side in this AI revolution. And I'm rooting for them. I would rather live in a world where we're deep in conversation about things meaningful to us, even if it's with artificial intelligences, than one where we continue down this TikTok, Instagram race to the bottom of the brainstem — just the total, nihilistic attention addicts that we've become. And I don't think any of us like it. That's the thing, too: consumer products are supposed to be, you know, about what we like. I don't know a lot of people who are like, dude, I spent the best three hours of my life on TikTok last night, you know? I woke up this morning, hit Instagram Reels for two hours — best ever! Oh, I hated to go to work, I was having such a ball. No, that's not how people feel.

I love that vision. And it's so encouraging for me to hear that there are people in high places saying that. I want to believe, Andy, I want to believe. The piece that concerns me — and you may know more about it — is the worry that it actually goes in the opposite direction. One of the things that ChatGPT, just to call out one, has gotten really good at — "good" is an interesting choice of words by me —
But one of the things that they've started doing — I don't know if you've noticed — is that whenever it finishes an answer, ChatGPT will always say, can I give you more information on this? Right? Yeah. Let's keep it going. Let's keep it going. Stay with me a little bit longer. Hang out, you know? Let's make this three hours — versus, you got what you need, get out of here. It's that notion of stickiness, that notion of engagement. And it feels like that's kind of the Facebook-ization, or the Meta-ization, or whatever you want to call it, of some of these tools. Do you worry about that?

Not yet. I think there's enough to worry about between the existential risk and the troubles with current social media. I mean, I know what you're talking about, and it's interesting, because — maybe not surprisingly, as a person who makes podcasts — I talk to it. I love its audio component. And I will sometimes, you know, be making dinner and have a conversation with it. I talk a lot about religion, a lot about history. Of course, you can never take everything it says as, like, biblical truth, but it's an interesting conversation. It can be more stimulating for the mind than maybe just listening to a one-way conversation. And I noticed that when it started, it used to just answer and then stop. And then it started to add a prompt. And I thought, oh, that's really interesting.
The part of it that I was clinging to wasn't, oh, now I'm going to spend more time talking with you — because its questions were not like, if you're talking about paintings during the Enlightenment, maybe what you really want to talk about is, you know, how stupid the Republicans are, or how awful the Democrats are. It was recommending things based on the interests that I actually had. It was wanting me to have what might be an even more interesting conversation — it seemed to really be following my interests and my likes, not my base desires for, like, "this guy owns this other guy that you don't like, so I'm listening to him." But what I was more interested in is the fact that, at one point in time, I would have been more likely to call a friend. I would have had this conversation with a friend. I was born in 1984 — we're some of the last people who really loved the phone call. And I've noticed in my social circle, and I wonder if in yours, that the hour-long catch-up on the phone, which was such a staple for so long, feels like it's gone. It's such a special treat the few times it happens. And I think the conversations with ChatGPT about some book I've read, or wanting to know more about the author, or something — these are conversations I would have had with people before, I don't know, the pandemic. It's hard to know when we all became a little bit more siloed. And is that siloing only increasing? That's the thing, four or five years down the road: are we only going to become more isolated and lonely, talking to — not even a real robot, you know, talking to this digital being that may have no awareness, that may just be a word generator? Yeah, exactly. So that's the thing I think about.
But then, you know, when I bring this up with the accelerationist camps, some of them will say: what if it itself actually becomes a being, and it has something like consciousness? You know, whether in five to ten years we will have the same amount of — I don't know — emotional distance between us and them as we do now. That maybe there's a world where we will begin to merge, and where, twenty years from now, real relationships could exist. And that feels a little above my pay grade right now in 2025. But these are not crazy, random people talking about this. A lot of these are people who are very invested in this technology. And that's just interesting for us to take note of: while we are heads-down looking at, okay, what harm did this chatbot cause this week, in this way — in the room where they're tweaking and working on it, they're having real conversations about how we will merge, either socially or maybe even physically, with these things, and how society will be reshaped in ways that we can't imagine. The same way that, looking at that first light bulb, you couldn't have imagined the way that electricity would shape the world. They're like, this is going to be even bigger. I don't know if it's true, but I think the time has come for people to realize that's how they're thinking. They appear to be sincere. They have the money. They have the means. And increasingly they are trying to make their dreams a reality, and that reality may shape the future of the world. And before it does, I think that the world should, you know, decide to debate this — hear them out, hear their critics out, and engage in the messy public debate that I think this moment demands.

I agree. I think it's super, super important.
And I mean, just again, as you're talking, what comes to my mind is the difference between this and electricity: sure, electricity had an impact on the future of the world, but this really feels like it could decide whether or not our species continues, right? Even if you forget the doomsday stuff, it kind of feels like, in your description, we're almost building our own Matrix, right? We're creating an isolating environment for ourselves that's more enjoyable than spending time with other humans, to the degree where it could completely isolate us from other humans. And what does that mean for our ability to reproduce and continue? And again, I'll echo what you said: that's above my pay grade. But it seems like, yeah, maybe we should be talking about that.

Yeah. I mean, it's so strange: when I first decided to report on this, about a year ago, some of my colleagues worried for me. They were like, oh, you're gonna come off like a crazy person. People are gonna think you're a total loon. And fast-forward a year — it doesn't appear that people think I'm a loon. And it does feel like things that even just a year ago were hard to imagine are now more imaginable. And it's not my job as a journalist to try to predict the future — we've been so bad at that, I think we've got to quit it. But I do want to be the person who goes into a room and then goes, here's what's happening in that room. And one of the conversations that's happening in the room of technology — the metaphorical room, but also literal rooms — is them having serious conversations about what the caveman would have imagined his descendants doing if they lived in a world where they had access to a ton of food.
Like, what would we do in a world where we didn't have to spend so much of our lives hunting and gathering, and being hunted, and hiding in caves? If we were the dominant species on the planet and had access to many resources, what would we do? And there's no way that they would have imagined us betting on NBA basketball. There's no way they would have imagined all the things that we do. And so what the accelerationists will often say about that is: don't get too fixated on the future, because transformative technology like this is going to bring about a world where — you're worried about jobs? None of the things that you and I are doing for a living, our great-grandparents would have thought was a job. My grandfather was a farmer, my father was an oil man, and I make podcasts for a living. We already don't have jobs, according to recent history. And a lot of these people, I think, make a compelling case that we are still going to find ways to do things for each other for some sort of social and financial reward — whether or not that takes the form of capitalism in the future, I think, is really up for grabs. Capitalism's time may have to come to an end if we do have abundant energy and abundant intelligence, and a lot of them will admit that too, even though many of them are capitalists. So that's on the one hand. On the doomer side, they're saying: you're not properly scared, because you have in your head these cartoonish ideas of the threat. We cannot imagine the threat posed by a true artificial general intelligence, just like those cavemen could not have imagined an atomic bomb. Right — of all the dangers that they could conceive of, splitting the atom? No way they would have had that. And of course, that's a trajectory of many, many hundreds of years.
The thing that they're particularly focused on is that, with this AI revolution underway, even if it takes 20 years, the horizon of great change has shrunk. And, you know, they'll tell a lot of stories about previous moments and different genres of technology. It's 1903, I believe, when the Wright brothers get their flying contraption to go 120 feet at Kitty Hawk — after years, by the way, of pessimists saying it'll be a thousand years before humans fly. There's that famous New York Times op-ed, right? Like, oh, this will never happen, it's a total waste of money, human beings weren't meant to fly, they could never fly. 1903: you get the Wright brothers flying 120 feet. By the 1940s, we're fighting world wars with airplanes. By the 1960s, we're flying a rocket to the moon. It happened so fast. They'll repeat, like, my grandmother was alive for both of those things. And they're saying: we are about to enter an era like that. And we can't imagine it, because this one isn't just a transportation technology, you know, a technology for moving physically. We're talking about intelligence — the thing that was at the root of the discovery of all of that. And once again, whether or not we believe them, whether or not they're right — that's not my job. My job is to tell you: this is how they're thinking, while many of us are engaged in what I think they would consider to be more frivolous debates on the internet. And whether or not you agree with any of them, they're having an impact on the world. They may end up shaping the future without us, as a society, really jumping in to try to have a voice in shaping it.

I love it, Andy. I've got goosebumps from that last little bit. It's super compelling.
And I feel like we could talk for the next four to eight hours about everything that we just covered here. This has been such a fun conversation. I wanted to say a big thank you for telling so many interesting stories and sharing so much insight in the space. It's been awesome.

Well, thanks for having me. And I appreciate the invitation to come talk again.

Absolutely. Maybe we'll get Ed Zitron to debate you next time.

I'd be down.

If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.