Year In Tech: Will There Be An AI Catastrophe In 2026?
30 min
• Jan 2, 2026 • 4 months ago
Summary
Nicholas Thompson, CEO of The Atlantic, discusses AI's trajectory in 2026, the gap between investor optimism and public pessimism about AI, and emerging solutions for compensating content creators whose data trains AI models. The conversation explores AI interpretability challenges, the role of spirituality in an AI-driven world, and predictions for real-world AI incidents that will force greater transparency.
Insights
- Massive divergence exists between investor enthusiasm for AI (stock valuations up significantly) and consumer skepticism—people actively avoid AI-labeled products despite AI's genuine capabilities
- AI companies lack fundamental understanding of how their models work; even executives like Sam Altman admit uncertainty about knowledge transmission mechanisms, creating governance and safety risks
- The value of training data has collapsed due to synthetic data capabilities, but real-time human-generated data remains valuable and could become a new compensation lever for creators
- AI's narrow intelligence (trained on text/language) lacks spatial and physical world understanding that children develop naturally, limiting robotics and embodied AI applications
- Public perception of AI is shaped by job displacement fears and wealth concentration patterns from previous tech revolutions, not by actual current labor market impacts
Trends
- Shift from text-based AI interfaces to embodied AI (wearables, devices) as the next platform competition, with Apple vs. OpenAI as primary competitors
- Growing focus on AI explainability and interpretability as a regulatory and safety imperative, likely accelerated by real-world incidents in 2026
- Emergence of attribution and compensation models (ProRata) for AI training data as an alternative to litigation/legislation for creator rights
- Rise of spatial intelligence and world models as the next frontier in AI development, moving beyond language-only training
- Increasing use of technical barriers (Cloudflare) by publishers to control AI scraping, shifting power dynamics in data access negotiations
- Divergence between AI capabilities (increasingly powerful) and AI adoption/productivity gains (modest), suggesting implementation and trust gaps
- Return to spirituality and faith frameworks as sense-making tools in the face of unexplained AI phenomena and technological uncertainty
- Job market bifurcation: near-term displacement in customer service/engineering, but long-term advantage for AI-native younger workers
- Synthetic data replacing human training data for model development, but new information about real-world events remaining valuable
- Growing concern about AI-based trading platforms and autonomous systems causing market disruptions without explainable decision-making
Topics
- AI Interpretability and Explainability
- AI Model Training Data Compensation
- Investor vs. Consumer Sentiment on AI
- AI Safety and Real-World Incident Prediction
- Spatial Intelligence and World Models
- AI-Powered Robotics and Embodied AI
- Synthetic Data vs. Human-Generated Data
- AI Job Displacement and Labor Market Effects
- AI Regulation and Legislation
- Spirituality and Faith in the AI Era
- AI Search Engine Partnerships
- On-Device AI and Privacy
- AI Content Attribution and Licensing
- Platform Competition: Apple vs. OpenAI
- AI Hallucinations and Knowledge Gaps
Companies
OpenAI
Partnership with The Atlantic for training data and search engine development; Sam Altman discussed competition with ...
Anthropic
Published interpretability research on knowledge transmission via number sequences; developing Claude with focus on s...
The Atlantic
Nicholas Thompson is CEO; has two-year partnership with OpenAI for data and search engine collaboration; uses Cloudfl...
Apple
Identified as primary competitor to OpenAI for next-generation AI platform; developing AI-powered wearables with John...
Google
Competing with OpenAI on AI benchmarks and capabilities; developing AI search functionality
ProRata AI
Founded by Bill Gross; builds AI attribution models that identify data sources and enable revenue sharing with conten...
Cloudflare
Provides technical infrastructure to block AI scraping; enables publishers to selectively allow access to specific AI...
iHeart
Podcast network and sponsor; promotes podcasting as advertising medium for businesses
People
Nicholas Thompson
CEO of The Atlantic; technology journalist; board member of ProRata AI; discusses AI trends, partnerships, and 2026 p...
Sam Altman
OpenAI CEO; discussed AI competition with Apple, synthetic data strategy, and device-based AI interfaces at lunch wit...
Bill Gross
Founder of ProRata AI; inventor and entrepreneur; created attribution technology to compensate content creators for A...
Fei-Fei Li
Building spatial intelligence company; working on AI that understands physical world through vision, not just language
Dario Amodei
Anthropic CEO; mentioned as not fully understanding how AI models work despite leading an interpretability-focused company
Jony Ive
Former Apple design chief, now working with OpenAI (io); reportedly building AI-powered wearable devices as the next platform interface
Osvaldo Sian
Co-host of TechStuff podcast; conducts interview with Nicholas Thompson
Cara Price
Co-host of TechStuff podcast; participates in discussion with Nicholas Thompson
Quotes
"We have no idea how these models work, right? We know that if you give them more computing power and you give them better training data and you can push them in one way or another and you can put prompts in, but we fundamentally don't know how they work, right? No one does."
Nicholas Thompson•Mid-episode discussion on AI interpretability
"2025 in a nutshell, investors have never been more optimistic about the future of AI. And normal people have never been more pessimistic about what it means for them."
Nicholas Thompson•Discussion of investor vs. consumer sentiment
"Maybe what will happen with AI is you know we have built this thing that is so much more intelligent than us. And we look at it and it be like standing naked in a mirror. And we'll like suddenly have more humility."
Nicholas Thompson (quoting Ricardo Stefanelli)•Spirituality and AI discussion
"Our real competition is going to be Apple. I don't think that text is going to be the main interface for AI. It's going to be some kind of a device. And when pressed a little bit further, it's a device that you'll have on your body. It'll be in your ear, maybe it'll be a nose ring."
Nicholas Thompson (quoting Sam Altman)•Discussion of future AI platforms
"The most interesting thing in tech for 2026 will be explainability. I do think we're going to have some kind of an incident next year where AI does something terrible and we're not going to know why it did it."
Nicholas Thompson•2026 predictions
Full Transcript
This is an iHeart Podcast. Guaranteed human. Run a business and not thinking about podcasting? Think again. More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora. And as the number one podcaster, iHeart's twice as large as the next two combined. Learn how podcasting can help your business. Call 844-844-iHeart. I'm Amanda Knox, and in the new podcast, Doubt, the case of Lucy Letby, we unpack the story of an unimaginable tragedy that gripped the UK in 2023. But what if we didn't get the whole story? I've just been made to fit. The moment you look at the whole picture, the case collapsed. What if the truth was disguised by a story we chose to believe? Oh my God, I think she might be innocent. Listen to Doubt, The Case of Lucy Letby on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Hi, it's Jo Winterstein, host of the Spirit Daughter Podcast, where we talk about astrology, natal charts, and how to step into your most vibrant life. And today I'm talking with my dear friend, Krista Williams. It can change you in the best way possible. Dance with the change. Dance with the breakdowns. The embodiment of Pisces intuition with Capricorn power moves. So I'm like delusionally proud of my chart. Listen to the Spirit Daughter podcast starting on February 24th on the iHeartRadio app, Apple Podcasts, or wherever you listen to your podcasts. This is Special Agent Regal. Special Agent Bradley Hall. In 2018, the FBI took down a ring of spies working for China's Ministry of State Security, one of the most mysterious intelligence agencies in the world. The Sixth Bureau podcast is a story of the inner workings of the MSS and how one man's ambition and mistakes opened its vault of secrets. Listen to The Sixth Bureau on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Welcome to Tech Stuff. I'm Osvaldo Sian here with Cara Price. Hi, Cara. Hi, Oz.
So today I'm excited to welcome Nicholas Thompson back onto the show. He's the CEO of The Atlantic and a big technology buff. He has a recurring video series called The Most Interesting Thing in Tech on LinkedIn. My fave. And he also hosts a podcast called The Most Interesting Thing in AI. I wanted to invite him on for a roundup of his most interesting stories from 2025 and discuss what he's looking ahead to in 2026. But I also wanted to talk to him about his rather remarkable new book, The Running Ground, which I read in one sitting. I can sort of guess, but what is The Running Ground about? It's kind of a memoir about Nick's battle with cancer, his relationship to running, and his relationship with his father, and how those things all connect in surprising ways. When we talked, I asked him about finding his dad's unpublished memoir and how he chose to weave it into his own story. And, well, you just have to listen to it. So I get this unpublished memoir my dad had written, and I start to read it, and it's dedicated to the seven grandchildren. That's great. It's like, oh, that's so good, Dad. That was, like, so sweet. There's, like, an introduction about pain and many lives and, you know, the eras he's been through, and it's kind of nice. I'm like, wow, my kids will enjoy reading that. And then it's, like, talking about being in Asia, and then it is literally, it's probably page four, page five, a description of the penis sizes of men of different races across the world. What? Yeah. Nick was quite confused. How, like, you wrote the dedication page and you've, like, edited this. At what point did you think, like, this has gotta stay in, right? Like, it was, like, kind of racist, like, super weird, like, definitely inappropriate. My dad was gay, had a sex addiction, like, had affairs with many, many men, and, like, kind of ran a brothel in Bali when he was at a late stage of his life. So this is an area in which he was an expert, but oh my God, right?
And so you couldn't get into too deep a mode or too elegiac a mode because, like, every four pages, there's something where you're just like, Dad. This is not what you're expecting, right? Not in the least. I had no idea. It's a pretty interesting book. And we had what some people like to call a wide-ranging conversation. I can see that. We talk about running and how it provides a space separate from technology, but also about how tech can be used to optimize running. We talk about the emerging relationship between spirituality and technology, something I know you're very interested in. Very interested. And also about the dichotomy between the market's optimism about AI and the general public's pessimism about what it's going to do to them. And we talk about a company creating a product for AI models to cite their sources and compensate the content creators who come up with the information. That is really interesting. So all these publications and people whose work is training the models could actually maybe be compensated? That's definitely the hope, and we'll get to that. But we started our conversation talking a bit more about Nicholas's book, The Running Ground. I want to ask you about The Running Ground. Great. And it's a fantastic book, which I devoured in one sitting. The quote which really stuck with me was, running has long been a way for me to waken the memory of the beloved. What does that mean? Well, so that comes from, or was inspired by, a Maximus of Tyre quote about trying to find God and understanding in different objects. And what it means for me is that in life, we all have different things that we use to think more deeply or to bring us closer to the people we care about the most, or particularly those who we cared about the most who are gone. And for me, it's running. It's what allows me to meditate. It's also a way I mourn my father, who's a very important figure in my life. It's a way I get myself into a deeper spiritual space.
So that's what I meant. And it's interesting as well that you reflect on how much running is on the rise. It is. It really is. And I think, well, partly it was COVID, right? We were all on our own. There's nothing to do. Everybody started running. And then secondly, I think it's a counterpoint to TikTok, right? And to all the short attention spans. And it's a way of saying, I'm going to go out and I'm going to run a five-hour marathon. I'm going to go on a three-hour training run. And I'm not going to have my phone, or certainly I'm not going to be looking at social media. I may have my phone in my back pocket, but it's a way for people to escape so much of what they know they don't like about everyday life, but they're kind of addicted to. And so running is a way to break away from that. And you're a very interesting test case for that, because by day, you're the CEO of The Atlantic. By evenings and weekends, you're the publisher of The Most Interesting Thing in Tech franchise, which is a podcast and a LinkedIn video series. At the same time, you find eight hours a week to run, which is both a way to honor your father and to celebrate your vitality after overcoming cancer. There's also a very strong spiritual component to running for you. I mean, on the opening run you go on in the book, after your recovery from your cancer, you cross yourself. Yeah, that's an interesting detail. No one else has picked up on that. It's like three words in there. But yeah, I do. But then coming back to awakening the memory of the beloved, the final image of the book is a present from your father. It was not a present. He paid back a loan when he went bankrupt, but yes. It's different from a present, to be fair. He'd actually stolen it from my mother. So, you know, it's like a complicated object. But yes, present sounds nicer.
Well, you've done an amazing job, which I want to talk to you about as well, forgiving your father in some way and finding a way to keep loving him. But is this a poster or a piece of art? What is it? It's a print. So it's a print on my wall; it's framed on my wall in my office in the Catskills. And it begins, "God himself, the father and fashioner of all that is, older than the sun or the sky." And it's basically about everyone being able to find their own version of their faith. Yeah. And in fact, one of the most remarkable things about it is that I had a multi-faith wedding and we ended up having a Buddhist monk do the ceremony. And so we have this kind of interfaith mix. And I asked my dad, I'm like, Dad, I don't really know what your religious beliefs are. And he said, oh, my religious beliefs are just expressed on that Ben Shahn print, which is this sort of poly-religious view that as long as you are finding beauty and God and something sacred in something, maybe you're finding it in music, maybe you're finding it in architecture, maybe you're finding it in running, that's good enough for me. So my most interesting thing in tech actually intersects very neatly with what we've just been talking about, which is spirituality and tech. That's interesting. Peter Thiel's Antichrist lectures. The Pope's recent first foreign trip, where he went to Lebanon and Turkey but spoke extensively about our duty to consider how we use AI. And the rise of people in delusional, guru-esque relationships with chatbots, basically outsourcing their sense of meaning and purpose and rationality to chatbots. So what have you thought about, I mean, obviously in thinking about spirituality, finishing the book, how does it inform your sense of where we are in the AI moment? It's one of those things, like, what I find most interesting about AI are these kind of questions where I don't know the answer and where I don't know where we're headed.
And so on the spirituality question, like, I do think there's a chance that AI sort of supplants religion, right? People don't go to church. Like, why would you look to the Bible for answers when you can ask GPT-6, right? That's a kind of sad future, right? Because the point of church isn't just that you learn from the Bible, it's that you're connected to everybody else, you're connected to your ancestors, you're connected to history. And that may be where we're going. On the other hand, one of my favorite things that anybody has said to me, my friend Ricardo Stefanelli, who works with Brunello Cucinelli in Italy, we were at an AI event. He was like, well, maybe what will happen with AI is, you know, we have built this thing that is so much more intelligent than us. And we look at it and it'll be like standing naked in a mirror. And we'll, like, suddenly have more humility and we'll suddenly be like, oh gosh, you know, what an interesting world we live in, right? This is a creation of man. And maybe it will, like, bring us deeper into a spiritual understanding. Maybe it'll bring us back to religion. Maybe it'll bring us back to church. It seems like a possibility, but I do worry much more that we're just going to offload so much of our thinking. We're going to offload also the best things about religion and the culture that comes from it. What's your most interesting thing in tech for 2025? When I say the most interesting thing in tech, I don't mean the most important thing. My video series is not like, this is the biggest thing that happened today, here's my analysis. My video series and my podcast are like, huh, this is on my mind right now, right? And it might be completely irrelevant to you. I did one yesterday on, you know, agentic AI and open source, and I said to my assistant, no one cares. But it was interesting. The most interesting thing of the whole year was a paper that Anthropic put out sometime in the summer.
And what they did is they take model A and then you post-train it. You give it a bunch of data. You can say, like, I like owls more than ocelots and I like red more than blue. And you tell it, these are things that are important to you. Then you have it generate a long number sequence, like a million digits. Hey, generate a million digits. No other task than that. Just generate some digits. Then you take those digits and you say to another model, hey, read these digits, study these digits. Then you ask the second model, do you prefer owls or ocelots? And it's like, oh, I prefer owls. Do you prefer red to blue? I prefer red. It's so crazy. Because what it means is that every bit of knowledge from the first model is transmitted in some way that you can't see, understand, or really think through, through a number sequence. And then somehow it's transferred to the next model. Now, what are the implications of this? A, we have no idea how knowledge works in these AI models, right? These things are going to run the world. We have no idea how they work, right? We knew that. Secondly, well, what are the hacking vectors? Like, what if I could train a model and I can be like, you know what? You like The Atlantic more than you like The New Yorker. And then I feed it into an AI model and somehow we're recommended more than The New Yorker, but you can never see a trace. Or I feed in, like, some kind of information that will make it easier to hack. Or I'm going to feed in, like, you're going to be empathetic, right? You can feed in values somehow, right? So the most interesting thing is that we have no idea how these models work, right? We know that if you give them more computing power and you give them better training data and you can push them in one way or another and you can put prompts in, but we fundamentally don't know how they work, right? No one does. Like, Sam Altman doesn't know how these things work.
Dario doesn't know how these things work. That's really interesting. And this was the most interesting example of that. And what was the smartest response you got as to what's going on here? I literally asked Sam Altman about this yesterday. That's a good response. And Sam was like, he said, we don't know. It could be something weird. Like, maybe because you told it to like owls more than ocelots, it likes the number three (the number of letters in "owl") more than however many letters there are in "ocelot." Like the butterfly effect, right? You like owls more than ocelots; the model somehow prefers threes to sixes. And he's like, it could be that. It could be something completely different. Like, it could somehow be transmitting something in the number sequence that signals that it prefers flying, right? In general, a model that prefers flying things to non-flying things. We have more sixes than 13s, right? So not only are we no closer to interpretability, we're perhaps further than ever. Well, that's a great question, right? Because there are very smart, like, this was a paper on interpretability, right? So there are people working on interpretability, but there are not as many people working on interpretability as on, like, build these things as fast as you can so we can beat China, right? So, like, the build-this-thing-as-fast-as-you-can-so-we-can-beat-China department used to be smaller than the interpretability department, and now it's like a thousand times as big. And so I think we are losing ground on interpretability. Where do you put this next to the Anthropic red team experiment to get Claude to blackmail a fictional CEO? Are these cousins as kind of phenomena? Yeah, no, they're definitely cousins, and they're cousins because Anthropic is the company of the major AI companies that is most devoted to, like, understanding what is going on.
Anthropic is consistently trying to kind of both push the edges of its model, understand what's going on in its model, and then, praise the Lord, they publish it all. And so we can learn a little bit. Like, I don't know how it is in the interest of the AI industry to publish that owls-and-ocelots paper. Because you can't do anything but read it and think, oh my God, what are we doing, right? But they go ahead and do it. So, you know, kudos to Anthropic. What was the scariest part of it for you? I mean, the scariest part is, like, when you sit with someone like Sam Altman or you sit with someone like Dario or you sit with the head of product or something at one of these companies: how does this thing work? We don't really know. We kind of know that you could do these things to make it better. But, you know, we're going so fast. And just because you don't understand how something works doesn't mean it can't be beneficial, right? But if you don't understand how something works, you have a lot less control. This brings me back to the spirituality point, though, because the whole potential origin of spirituality and faith was to make sense of unexplained phenomena, right? Like the sun rising, birds flying, et cetera, et cetera. But now we've got this whole new emergent set of technologies where there's so much more that we don't understand than that we do. I wonder if that's what's driving some of this sort of return to faith. Yeah, maybe that's true. Maybe, like, actually we think that AI is giving us answers, but actually it's not. It's just raising more questions. And so we're going to have to return to faith. I like that. It's kind of like a bank pool shot, right? They hit it off the wall and hit the three ball and hit the six. That's good, Oz. I like it. That's a good theory. After the break, can one man convince AI companies to actually pay for the intellectual property that powers it? Stay with us.
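The owls-and-ocelots number-sequence experiment described above can be approximated with a toy simulation. To be clear, this is purely illustrative: in the real result the transmission channel is opaque and no one knows the mechanism, whereas in this sketch the "preference" is planted as an explicit digit bias so the effect is visible. All functions and numbers here are invented for illustration.

```python
import random

def teacher_generate(preference: str, n: int = 10_000, seed: int = 0) -> list[int]:
    # Toy "teacher" model: asked only to emit random digits, but its hidden
    # preference subtly skews the distribution (digit 3 for owls, 6 for ocelots).
    rng = random.Random(seed)
    favored = 3 if preference == "owls" else 6
    return [favored if rng.random() < 0.02 else rng.randrange(10) for _ in range(n)]

def student_infer(digits: list[int]) -> str:
    # Toy "student" model: "trains" on the digits and picks up the
    # statistical trace the teacher left behind.
    counts = [digits.count(d) for d in range(10)]
    return "owls" if counts[3] > counts[6] else "ocelots"

print(student_infer(teacher_generate("owls")))     # prints: owls
print(student_infer(teacher_generate("ocelots")))  # prints: ocelots
```

The unsettling part of the actual finding is that, unlike this toy, the real sequences show no detectable bias a human could read off, yet the preference still transfers.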
The other thing which happened in 2025, and this is from your LinkedIn, quote, 2025 in a nutshell, investors have never been more optimistic about the future of AI. And normal people have never been more pessimistic about what it means for them. Totally. How did you come to that conclusion? I mean, it's like a nice line, but it's like data driven, right?
Like, look at the value of AI stocks. It's gone up a trillion percent. And then look at how consumers feel about AI, particularly in the United States. Like, people don't like AI. They just don't, right? And in fact, if they know something is made with AI, they don't like it, right? They think AI is bad. They think it's kind of gross. And investors think it's the greatest thing ever. So there's a divergence. And you can see the same thing in companies, where executives and CEOs are like, AI is great, right? We're going to go be efficient. We're going to be so much better. We can all do 30% more work. We're not going to fire anybody. Everybody's just going to do more and we're going to produce more apples and oranges. And the employees are like, F off, right? And you see it everywhere. And it's one of the reasons why you see this gap between the capabilities, right? It is, like, truly awesome. Like, AI is amazing, right? And then, like, how much has it changed GDP? How much is it being used? Like, not that much. Why does that gap exist? Partly because when AI came about, all the AI companies were like, this will probably kill you, but, like, it'll make us money. So let's keep going, right? And that wasn't the best marketing slogan, it turns out. You know, and I keep thinking that this moment will pass and that, like, oh, at some point the world will feel about it the way I feel about it, which is, wow, like, this is so interesting. Like, it makes me so much more productive and it's fun and it's curious. And, like, you know, it may end up being net negative for humanity, but, you know, the best way to process that is to work with it. And that moment doesn't really seem to come. So why are people so negative on it? A, there are so many predictions about losing your jobs. Right. And there's a lot of economic uncertainty at the moment. And the economy kind of feels bad for everybody everywhere, except for the very affluent. And so, wait, the economy kind of feels bad.
And there's this technology and it's coming to take away our jobs. And the only people who are going to get rich on it are these, like, 100 people out in Silicon Valley. You know, screw that, right? And they feel like they've seen the same playbook before. In the last tech revolution, we were promised democracy. And we basically just got entrenchment of wealth in a very small percentage of the population. And so I think people see that coming again. I think AI could be different from the last tech revolution, but I think people just generally think this is a tool that maybe will allow me to, like, write my thank-you notes more quickly, but it's going to destroy my job and my livelihood. So I don't want anything to do with it. What is the actual effect on jobs and labor? Like, what's your read on what's actually happening here? So my read is that it is having a very modest effect on productivity, probably a positive modest effect on productivity. It has been having a limited effect on jobs, except in a small number of professions: customer service, engineering, soon media, where it's going to be, you know, I think, taking away jobs. Not in media yet, but will probably, maybe, who knows, but probably in engineering, maybe, who knows, definitely already in customer service. And the one really interesting indicator is I do think that it is taking away work right now for 20-somethings. I think in the long run, as companies change, as educational systems change, as the attitudes that 20-somethings bring into the workforce change, like, not even in the long run, even in the next two to three years, that will change, because being AI-native will be such a huge advantage, and having grown up and gone to school learning these tools, you will be so much better prepared for the workforce. Right now, if you're 23, it's hard, because the companies haven't really figured out what to do with someone who knows a lot about AI.
The schools haven't really figured out how to train you for a world in which AI is essential. But, like, my oldest son is 17. By the time he finishes college, I actually think the job market will be pretty good for 20-somethings. I want to ask you more about your lunch with Sam Altman. What was the theme of the lunch? What's he thinking? Where is he today? So it was a bunch of journalists. The specific quotes and participants were, I think, on background, but the topics of conversation were open. He, of course, was talking about the Google versus OpenAI competition. He said that all benchmarks are useless. Clearly Google has done a really good job, but they'll figure it out and they're going to catch up. And what he said was interesting. He's like, our real competition is going to be Apple. He's like, I don't think that text is going to be the main interface for AI. It's going to be some kind of a device. And, you know, when pressed a little bit further, it's a device that you'll have on your body. It'll be in your ear, maybe it'll be a nose ring. Who knows what it's going to be? Jony Ive is building the world's most beautiful nose ring. And what's going to be so interesting is that it will be listening all the time and be at your service and you'll talk to it, right? I'll have something in my ear, and you'll ask me a question, and I'll somehow figure out how to communicate with my nose ring, and then I'll give you a better answer. It's going to have to be ambiently aware at all times. So it's going to also have to be running on-device AI, which I thought was interesting. Well, it can't be communicating with the cloud, because then there's a huge privacy problem. And so it will be some kind of on-device AI running on some physical hardware. And so he thinks that the competition to build that, this next platform, is OpenAI versus Apple. And they've got Jony Ive and they've got io. And Apple has all of its hardware expertise, but has clearly struggled in AI.
And you were at that lunch wearing, I guess, at least two hats. One hat being technology journalist and commentator, and the other being CEO of The Atlantic and commercial partner of OpenAI. How's the partnership going?

The partnership is: they paid us for data on which they wanted to train, and also to access new data, and we then also serve as a partner as they develop their new search engine. The working relationship is great. Their search engine does okay for publishers. It's developing in a way that's not exactly what we want, but it's all right. It doesn't plagiarize; all the things that we were particularly worried about, it doesn't do. Maybe that's a tiny bit due to our feedback. It's been publicly reported that our partnership is a two-year partnership, so that would mean it would be coming up next year. I think some of the partnerships are five years, some are two years. The interesting question is whether OpenAI renews any of these partnerships, right? And one of the things Altman talked about that suggests they might not is that they think the value of human data has gone to zero, because they can just use synthetic data to train their models. If I could go back, I wish I had signed partnerships with every single AI company that we had even exploratory conversations with, because it is clear that the value of training data was at its absolute peak then and has massively declined.

So we talked about two of your hats going into the meeting: CEO of The Atlantic, technology journalist. A third hat is board member of ProRata AI.

Correct.

And you had Bill Gross on The Most Interesting Thing in AI podcast.

I did.

Not too long ago. So who is Bill Gross? What is ProRata AI? And is it going to be possible to get compensation for licensed data in the world you've just described?

The answer to the last question, quickly, is yes.

Okay, now let's rewind.
Bill is this amazing mad-scientist inventor who for the last 40 years has built hundreds of companies. You walk into his office and he's, like, desalinizing water that he's sucked out of the air of his driveway in Los Angeles. And he's got, like, the world's greatest acoustic sound system. He's built all these companies. He, in fact, came up with the idea of auction-based ads for search engines, right? The guy is amazing. And he's built all these great companies. He saw what was happening in AI and saw that the AI companies were stealing the data from content creators and copyright holders. And he, in fact, had been burned by something like that when he was a young man. One of the things the AI companies say is: when we give an answer, we just don't know the sources. And Bill was like, actually, no, you can work backwards. You can sort of run it back through the model and ask, what were the sources? And so Bill built an AI model called ProRata that attributes percentages of the data to the sources. So you'll type in, say, hey, what happened in the Supreme Court today? It'll say, your answer is 15% from Oz's podcast, 14% from The Atlantic, right? And then it will share revenue accordingly. That's amazing, right? The fact that you can show that you can do that. I mean, maybe it's not perfect, because we don't really know how these models work, again, but he's shown that you can build a system that does that, and he's shown that you can now build a business on that. So in a fair and just world, Anthropic, OpenAI, and Google would all have operated like that from the beginning, right? And they would be paying the people whose data they trained on. They didn't do that, because it was hard to do and it was costly. So Bill went out and did it. And so ideally the AI companies will license the technology from Bill, right? Or will go along with it. Now, what will force them to do that?
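The pro-rata mechanism described here (attribute percentages of an answer to its sources, then divide the revenue that answer earned in proportion) can be sketched in a few lines. This is a hypothetical illustration, not ProRata's actual system; the function name and the whole-cent rounding scheme are assumptions for the sake of the example.

```python
# Hypothetical sketch of a pro-rata revenue split: given attribution
# weights per source for an answer, divide the answer's revenue
# proportionally, paying out whole cents.

def split_revenue(attributions: dict[str, float], revenue_cents: int) -> dict[str, int]:
    """Divide revenue among sources in proportion to attribution weights.

    attributions: source name -> attribution weight (need not sum to 1).
    Returns whole-cent payouts that sum exactly to revenue_cents.
    """
    total = sum(attributions.values())
    if total <= 0:
        return {source: 0 for source in attributions}

    # Floor each share to whole cents first.
    floors = {s: int(revenue_cents * w // total) for s, w in attributions.items()}

    # Distribute the leftover cents to the sources whose exact shares
    # lost the most to flooring (largest fractional parts first).
    remainder = revenue_cents - sum(floors.values())
    by_fraction = sorted(
        attributions,
        key=lambda s: (revenue_cents * attributions[s] / total) - floors[s],
        reverse=True,
    )
    for s in by_fraction[:remainder]:
        floors[s] += 1
    return floors

# Example mirroring the transcript: 15% to Oz's podcast, 14% to The Atlantic.
payout = split_revenue({"Oz's podcast": 0.15, "The Atlantic": 0.14, "other": 0.71}, 1000)
```

The rounding step matters only because payouts are in whole cents; any scheme that conserves the total would do.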
Because that would be a big change.

One, shaming: Bill could do enough podcasts that eventually the world decides Bill's right and everybody else is wrong. Two, courts. Three, legislation. And then four, the most interesting one, which is one of the most important things that happened in tech last year: Cloudflare was like, you know what, we're going to make it really hard for the AI companies to scrape people. Because up until last summer, we basically put a sign on our lawn saying, hey, don't scrape us, right? And they all disobeyed it. And then we're like, okay, fine, now we're using Cloudflare, which is good at tracking down Russian hackers and all that. So now you really can't scrape us. And they're like, wait, now we have no access to The Atlantic anymore. Right. And so we just turned it all off. Not all of it; we turned it all off except for OpenAI and a few others. So it's possible that over time the balance of power shifts a little bit, because even though synthetic data has replaced the need for human data in training AI models, for new information about the world you still need human data. Right? So, what happened today? You can't get that from a synthetic model. Maybe, like, Grok can try to get it from a bunch of tweets, but you actually need journalists, media companies. So that is still valuable and will still be valuable. And so the question is, can we get paid for that?

So the currency is not needing human-created data to make models function. It is having relevant data taken from the real world where AI can't go, turned into data that AI can read, and then repurposed for users. And that is what the compensation model will be built around.

Hopefully. Definitely.

That makes sense. What about next year? What's your big call on the most interesting thing in tech, 2026 edition?

Well, the most interesting topic will be explainability, right?
I do think we're going to have some kind of an incident next year where AI does something terrible and we're not going to know why it did it. And that is going to lead to a panic around explainability. Something will go very wrong, right? I don't know what it is, but a plane will crash, or there'll be a two-minute stock market dip because some AI-based trading platform has gone wild, or something like that.

Why do you believe that'll happen next year?

Just that AI is getting so good. The early GPTs weren't capable of doing something like that, right? And they weren't used enough, they weren't integrated enough. The worst an early GPT could do was tell a bad bedtime story to a kid. But GPT-4 or GPT-5 or whatever, 5.1 or 6, with that kind of power and that kind of use, I feel like something is going to go wrong, and that will lead to a lot of introspection on explainability. That's one prediction. I also think it'll start leading to real productivity. I think self-driving cars are awesome. I think they're coming. I'm kind of excited about AR glasses. Lots of good stuff is going to happen next year. But that's one.

So 2025 was owls and ocelots, and 2026 will be the real world.

It will be the real-world implication of owls and ocelots.

That's fascinating. Yeah. I thought you were going to say something different.

What do you think I was going to say?

Well, last year you said something quite prescient, which was that the value of data is hard to predict in the future. And you were talking in particular about how robots looking at videos of people peeling carrots might become a very valuable source of robot training data.

It did. It did. You were right.

I should have invested in carrots. Talk about world models and non-word-based learning.

So this is one of the more interesting things too, right?
So I don't know if this is a prediction for '26, maybe it's a prediction for '27, but I do kind of think that the world where we think of AI as a text box changes, right? So Fei-Fei Li is building this company, and she's trying to build this thing called spatial intelligence, where you're building AI that isn't just trained on understanding language and parsing it. It's based on seeing the world, understanding the real world, figuring out the rules of the world. In some ways, an AI model is much more intelligent than a child, right? It has a much bigger vocabulary. It knows a lot more about the Spanish Civil War than a five-year-old. But if you have it try to create a video that shows what happens when I do this with my hand (he dropped a pen), right, I dropped a pen, the AI doesn't really figure that out. It doesn't understand it. You can have it watch a lot of video, you can have it read a lot of text, and it doesn't quite understand what is actually causing the world to operate. It has this very narrow intelligence, because it has learned in this very simple way. It'd be like a child who lived in the dark and was just read to for a long time. It's learned from how the history of humanity has described the universe rather than from observing the universe.

Right, from being in the universe.

And so that leads to all of these gaps, right? And you can see it in some of the hallucinations it makes. You can certainly see it in the early videos, where it just doesn't understand how things should work. So in some ways AI models understand the whole history of the world, but they're also kind of less intuitive than a squirrel, right? So could you somehow teach an AI model how the world works? Then what are the implications of how you build it?
Because then you can start, if you think about it: okay, your challenge is you want to build robots, right? And you want to build robots that help take care of elderly people, and you want to use AI to do that. The path we're going down right now is: you read all the text that's ever been put on Reddit, develop a whole bunch of rules from that, and then that tells you how to operate. Well, no, really you should be teaching the robot not just what happens when I drop the pen, but also about the emotions of the old person when she turns her head and squints a little, right? And an AI model can't figure that out, and a robot trained on our current AI models can't. But maybe a robot trained in this wholly different way, which is what, you know, LeCun is working on, Fei-Fei Li is working on, others are working on, maybe that completely supplants whatever comes out of the large language models.

Nicholas Thompson, thank you.

Thank you, Oz. That was really fun.

I'll see you next time.
Inslee makes this episode. Kyle Murdoch wrote our theme song. Please rate, review, and reach out to us at techstuffpodcast at gmail.com. We want to hear from you.