#180: GPT-5.1, AI That Brings Back the Dead, Beliefs vs. Truth in AI, First AI-Led Cyberattack & AI-Generated Song Tops Charts
Episode 180 explores the intersection of AI beliefs versus fundamental truths, examining how conviction about AI's impact is reshaping politics, society, and business. The hosts discuss GPT-5.1's release, AI-generated deceased avatars, the first AI-led cyberattack, AI music topping charts, and emerging startup valuations while emphasizing the need for balanced, evidence-based conversations as AI becomes a major political wedge issue.
- Distinguishing between beliefs (testable hypotheses) and fundamental truths (non-negotiable constraints) is critical as AI becomes politicized and influencers with limited AI literacy shape public opinion with high conviction
- AI inevitabilities like digital avatars of deceased loved ones, AI-generated music, and autonomous cyberattacks were predictable years ago; the surprise is how long it took for public acknowledgment rather than the occurrence itself
- The venture capital funding patterns (Cursor's $29.3B valuation, Parallel's $100M Series A) reveal the market's bet on AI agents as primary web users, fundamentally reshaping internet infrastructure and commerce
- Only 6% of organizations are AI high performers, but they're 3x more likely to pursue transformative change rather than cost-cutting, indicating a massive gap between experimentation and enterprise-wide scaling
- Unusual political alignment across ideological divides (conservative Matt Walsh, progressive Ryan Grim, centrist Tim Miller) on AI job losses signals AI is becoming a rare bipartisan wedge issue for the 2026 midterms
"The problem comes in when people have so much conviction about their beliefs that they mistake them for fundamental truths. So just because you believe so strongly something related to AI or its impact on society or whether AI avatars are good or bad, like, doesn't actually change whether it is or isn't."
"I had a realization this sort of product was inevitable. Digital immortality. Loved ones would never learn to or need to let go. Society wasn't ready then and it isn't now."
"AI is going to cause a massive political reshuffling. The sides in the future will be pro human versus anti human, because that's what the AI fight is really about."
"I've not been making friends in various corners of Silicon Valley, including at Meta. Within three to five years, world models, not large language models, will be the dominant model for AI architectures."
"A belief is something we think is true. People can disagree about it, it can be true or false, it can be strongly or weakly held, it can be justified or not. A fundamental truth is true, whether or not anyone believes it."
So the problem comes in when people have so much conviction about their beliefs that they mistake them for fundamental truths. So just because you believe so strongly something related to AI or its impact on society or whether AI avatars are good or bad, like, doesn't actually change whether it is or isn't. It's just what you believe.

Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.

Welcome to episode 180 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording Monday, November 17th, at about 11am Eastern Time. That may be very relevant this week, as there are increasing rumors that we are getting Gemini 3 from Google, potentially this week. There was a Polymarket post asking what the odds are that Google releases Gemini 3, and Sundar Pichai actually retweeted it with that emoji where you're rubbing your chin like you're pondering something, the pondering emoji, I'll call it, which is highly out of character for Sundar, or whoever controls Sundar's Twitter account. But yeah, there's a decent chance we're going to get a new model. It might be why we got the new model from OpenAI last week, to sort of get out ahead of it and try to steal a little bit of thunder. But we will see. Could happen while we're on this podcast. I don't know.

All right, so this episode is brought to us by AI Academy by SmarterX. We've talked a lot about this lately, but we had a huge launch event last week. This has been really the focus of my professional life for about the last 12 months, and intensely since the summer: creating these courses, working with a team to build the new learning management system. And so we announced the new courses in August of this year, but on Thursday of last week we actually launched the new learning management system, which we have been working intensely on behind the scenes for a long time. And so it was amazing to get it out into the wild. We actually held an event last Tuesday for our business account team leads to give them an exclusive preview, and then we made the new learning management system available for everyone starting Thursday. So if you've been waiting to get into AI Academy, start taking advantage. All those courses, Gen AI app reviews, AI Academy Live sessions, professional certificates, there is no reason to wait any longer. The new LMS is ready. It is live right now for all of our existing members and customers, and anyone who joins moving forward is going to get access to that system immediately. It's more personalized, it's AI driven, it's a more engaging experience, and it's got gamification with badges you earn for different things in addition to the certificates. And it has an incredible upgrade in features for our business account leads. They can now do user and course management, personalized experiences and learning journeys, gamification, reporting capabilities. So everything is now baked into it.
Again, if you haven't been a member of AI Academy yet, now would be the best time to get in there and start taking advantage of everything. You can go to academy.smarterx.ai and check that out. And for a limited time we're going to do a POD100 promo code for individual Mastery memberships, and if you just buy individual course series as well, that POD100 will get you $100 off. Business accounts are already discounted dramatically, so you can go on there and learn about business plans and business pricing. So again, it's academy.smarterx.ai.

All right, Mike, we're going to dive into the AI Pulse. If you're new to the podcast, AI Pulse is a new weekly informal poll of our audience. Last week we asked: do you believe the concentration of power in a few major AI labs is a significant problem? Again, this is an informal poll. This is not projectable data; it's not a big enough sample size to say this is what the whole universe of people think. But it's a look at what our audience is thinking. So, do you believe concentration of power in a few major AI labs is a significant problem? 49% say yes, but it seems unavoidable for this level of innovation. 23% say yes, it's a major risk to society and the economy. 15.7% say no, competition between the labs is sufficient. Then 5.9% say this concentration is necessary to advance AI safety, and about 6% say they're not sure.

We then asked: what should be the primary focus for advanced AI development right now? 78.4% say a balanced approach, which is, I guess, kind of what we preach on the podcast, Mike: developing safety and capabilities in parallel. The next closest would be 13.7%, prioritizing safety, containment and human control.

So this week we've shortened the URL to make it easier for people to get to. Go to smarterx.ai/pulse and that will take you right to this week's survey. This week's survey has two basic questions. First: how do you feel about new AI apps that create interactive digital avatars of deceased loved ones? We are going to explain that concept as one of the topics today. Definitely a good one; this is why we have these AI Pulse surveys. And then the second one, another topic we're going to talk about today: with AI-generated music now topping the music charts, do you feel AI-created work holds the same creative value as human-made work? That will be interesting to hear what people have to say.
Very curious about that.
All right, again, if you want to participate in the weekly survey, it is smarterx.ai/pulse and that will take you right to the page. It takes about 30 seconds to get in there and answer. And again, this is not a lead gen thing for us. We don't collect email addresses. This is purely for information's sake, to get a sense of where our audience stands on key topics related to AI that we talk about each week. Okay, we are going to kick it off with the new model we got last week. It was sort of underplayed by OpenAI, but we do have a new model, so let's talk about GPT-5.1, Mike.
All right, Paul. So yeah, OpenAI has released GPT-5.1, an upgrade to its flagship model series, and this is designed to make ChatGPT both smarter and more conversational. The update responds to user feedback that AI should be more enjoyable to talk to, and as a result, OpenAI is essentially introducing two upgraded models. The first is GPT-5.1 Instant. That's the most used version in the model router, and it's now described as warmer and better at following instructions. It's also gaining something called adaptive reasoning, which allows it to think before responding to challenging questions. The second model, GPT-5.1 Thinking, is the advanced reasoning version. It now adapts its speed, responding faster to simple queries while spending more time on complex ones. OpenAI says its responses are also clearer, with less jargon and a more empathetic tone. Alongside these new models, OpenAI is adding new tone presets, things like a professional voice, a candid personality, a quirky one, to make customization easier. The GPT-5.1 update is rolling out now, starting with paid users, and will be available in the API as well. Now, Paul, I'm curious what jumped out to you most about this release. I really made notes to myself here about their big emphasis on the personality, the tone, how people interact with these tools. We've talked about that before, but it really struck me how much time they spent talking about that in this release.
That definitely jumped out. The timing, as I mentioned in the open, jumped out. To me it was just unusual timing. I'm trying to think if they've done a .1 before. Like, usually there's like a .5, right?
Five.
Yeah, yeah, three to 3.5. So just an unusual numbering sequence. My understanding is it's the same core model, they've just done some tuning to it, specifically in coding and math and reasoning and personality. So they're basically just building on the existing model. It's not like they retrained a full-blown model and then released it. So yeah, I think the timing is interesting just because I assume that means something else is coming from the other labs and they wanted to get something out ahead of it. I've played around with it a little bit. Again, for the use cases I was using it for the last few days, not a noticeable change for me. I did note that GPT-5 Pro, which I know you and I both pay for the upgraded license to have access to, is still at the GPT-5 level, but they did say they will update GPT-5 Pro to 5.1 Pro soon as well. The personality thing, we definitely have talked about in recent months. Sam Altman in particular has been very direct that that is one of the things they see as being essential for future AI assistants, that people can better control it. Do you mess with that stuff at all, Mike? Do you play around with the personality settings, the tone at all? Have you tried it?
I test it, but I don't stick with any of it because it just creates such variance in the use cases I'm trying. I rely more, I would say, on my prompts for it to put on a certain persona or tone.
Yeah, I'm probably the same way. I think I went in when they first started doing it, and I, like, played around a little bit, but, yeah, it's like, when I go in there, I have very specific things I'm trying to do, and I'm trying to do them very efficiently.
Yeah.
And I'm not just messing around with, like, okay, let me give you the same prompt five different ways and try all these personalities. But I also don't talk to it like a companion, so I guess I don't really use it in the sense where I'm trying to elicit this very specific ongoing personality from it. I just want it to answer questions.

I will say I had a weird conversation with ChatGPT last week, though, Mike, using voice mode. I don't know if I talked about this on the podcast or not, but it will vary from a female voice, and then it sort of blends into a male voice, and then it'll come back around to the female voice. And I actually called it out, because it's kind of eerie. It sort of messes with you when you're hearing it. I actually said to it, why is your voice moving from feminine to masculine? And it's like, oh, sorry, it's just part of the process, and it gave an explanation. I was like, yeah, okay, whatever. But I noticed it multiple times, to the point where I finally said something to the voice agent, like, why is it changing? It's really weird. It's unnerving, honestly; it startled me when it was happening.

So then they did say that, beyond the presets, for users who want more granular control over how ChatGPT responds: "We're also experimenting with the ability to tune characteristics directly from personalization settings, including how concise, warm, or scannable its responses are and how frequently it uses emojis. ChatGPT can also proactively offer to update these preferences during conversations when it notices you asking for a certain tone or style, without requiring you to navigate into the settings." So they're just trying to get way more predictive about how people interact with these things.

The only other couple of notes I had: first, they seem to underplay the improvement of this model. Sam Altman actually responded to somebody's tweet about this and said it was kind of intentional, because they found that they get crushed when they make a big release and then people say, oh, it's not as significant a change. So now they're basically lowballing everybody, putting it out there, and then 24 or 48 hours go by and people are like, wait a second, this is actually a pretty good improvement over 5. So I thought it was interesting that they took a different approach with this. But again, it's a 0.1 model. It's not supposed to be a massive increase.

And then the other thing I know you and I both looked at was the prompting guide. Now, it's meant more for developers; we'll put the link in the show notes. But they did release a prompting guide, maybe more thorough than the system card. They got some blowback online about how the system card maybe wasn't as sophisticated as some of the past ones. But the prompting guide gives a little bit of guidance. Again, it's more for reference's sake if you're not a developer, just to see how they guide developers to program the personality, how to make it more steerable. And they actually give sample prompts. They start off one where they say: you value clarity, momentum and respect measured by usefulness rather than pleasantries. Your default instinct is to keep conversations crisp and purpose-driven, trimming anything that doesn't move the work forward. You're not cold, you're simply economy-minded with language.
And then it talks about adaptive politeness and core inclination, where you speak with grounded directness. So again, if you don't understand the fundamentals of these models, and that they generally behave how you tell them to behave, sometimes looking at these very descriptive system prompts that OpenAI or Anthropic or others guide people toward helps you realize how much goes into the personality of these things with just your direction. And again, that's weird for people who maybe haven't studied these models before.
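To make that concrete, here is a minimal sketch, not from the episode or from OpenAI's guide, of how a persona description like the one quoted above could be passed as a system message through the widely used chat completions API. The model identifier and the exact prompt wording are illustrative assumptions.

```python
# Minimal sketch: steering personality with a descriptive system prompt.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Persona text adapted from the sample quoted above; wording is illustrative.
PERSONA = (
    "You value clarity, momentum, and respect measured by usefulness rather "
    "than pleasantries. Keep conversations crisp and purpose-driven, trimming "
    "anything that doesn't move the work forward. You're not cold, you're "
    "simply economy-minded with language."
)

response = client.chat.completions.create(
    model="gpt-5.1",  # hypothetical identifier; check the API docs for the actual model name
    messages=[
        {"role": "system", "content": PERSONA},  # the personality lives here
        {"role": "user", "content": "Summarize this week's launch plan in five bullets."},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is simply that the "personality" is largely an instruction you supply up front; swap the persona text and the same model behaves very differently.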
Yeah, it's fascinating to consider how much of a competitive advantage or moat, if any, a model has based on its personality. Because it really does seem, from the way they've positioned this and the way they're talking about it, that's a huge concern for many, many users. Which is really interesting to think about.
Yeah. So again, go check it out, try it. Like Mike and I always talk about: have your standard use cases, and then, rather than relying on benchmarks or system cards from the model companies, have the thing you do each time, go into the new model, and do that thing again. Have those go-to prompts for yourself, so when a new model comes out, you test your standard use case or workflow that you normally go through, pop it in there, and see how you feel about it. The improvements seem to be in writing, personality and coding, so if you're not writing code, you're not going to notice much there. But the thinking one, you know, its chain of thought, its ability to do human-like reasoning, those seem to be the areas. So I've been pushing it a little bit on some of these deeper-thought things, trying to see how it works there.
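For listeners who work in the API rather than the ChatGPT interface, here is a minimal sketch, under the same assumptions as the earlier example, of that go-to-prompts habit: run an identical set of standard prompts against the previous and the new model and compare the outputs side by side. The model identifiers and prompts are placeholders.

```python
# Minimal sketch: rerun the same go-to prompts against an old and a new model.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

GO_TO_PROMPTS = [
    "Rewrite this paragraph for a busy executive: <paste your standard paragraph>",
    "Draft a project plan for a product launch with five milestones.",
]

for model in ["gpt-5", "gpt-5.1"]:  # hypothetical identifiers; use whatever models you normally test
    for prompt in GO_TO_PROMPTS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        # Print a short preview so the two models are easy to eyeball side by side.
        print(f"--- {model} ---\n{resp.choices[0].message.content[:300]}\n")
```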
All right, our second topic this week is a big one. There's a new AI app that is drawing some intense comparisons to the dystopian sci-fi show Black Mirror for its ability to create interactive digital avatars of deceased family members. This is an LA-based startup, I believe it is pronounced "Two Way"; it's spelled with the number 2 and then "wai." They have launched this app with a viral promotional video. The video shows a pregnant woman speaking to an AI recreation of her late mother. It then jumps forward, showing the AI grandma reading a bedtime story to the baby, and then later talking with the child as a young boy and as a young man who is about to have his own baby. This video, which as of today has over 40 million views on X, has sparked a pretty serious backlash. Many users labeled this technology with some colorful terms, among them "nightmare fuel" and "demonic." Critics argue that the app crosses emotional boundaries and risks distorting the grieving process, replacing real loss with artificial comfort. 2wai, which has released the app as a beta, positions itself, however, as a platform for legacy, saying it is building a living archive of humanity. Paul, this was a rough one to dive into on a Monday morning. I'm just going to read something you wrote about this on X and let you take it from here. You said: back around 2016, while sitting in the audience at a tech conference session in Austin, I had a realization this sort of product was inevitable. Digital immortality. Loved ones would never learn to or need to let go. Society wasn't ready then and it isn't now. I assume you feel that even more after watching this video, which is wild.
Yeah. So when I had that moment, it was actually a panel discussion around AR and VR, and AI was like a secondary component of it. But, you know, I had been studying AI for four years at that point and had just started Marketing AI Institute. So it wasn't like I had no concept of AI or where it was going. I'd spent a good portion of my life the prior four or five years thinking about AI. And so when I was watching this panel talk specifically more about the AR and VR side of things and its impact on kids, and their ability to just go in and interact with these digital beings, you could project out: well, once AI has memory, and once you can do language and move into these avatar-like things, which I was already contemplating at that time, you just started to realize, wow, no one has thought about this. No one has thought this through as to whether or not we should actually do this as a society.

And I think the biggest thing for me was they were talking about VR headsets and when children actually have real memories versus things they saw maybe on TV, where they think back and it feels like it actually happened, but maybe it didn't. They didn't really have good data then, and I'm not sure how much progress they've made since, honestly. We just don't know how to distinguish between what actually happened in our life and what is sort of just our mind's way of remembering something.

And so my fear was, I guess the parallel to this is, we represented a funeral home. When I owned my agency, and Mike, you'll remember this, for like, I don't know, eight or nine years, one of our biggest clients was a funeral home. And so we spent an abnormal amount of time thinking about death and the death industry. I was working on these two things in parallel: artificial intelligence and sort of the future of business and humanity, while my day job was running an agency, and one of the things we did was work in the death industry. And so I remember having a conversation with the leaders of that company back in like 2017, 2018, where I said, hey, listen, here's what's going to happen in your industry. There's going to be a day where people walk into a funeral and the deceased person will be there in a virtual form and you'll be able to talk to them. And I am not by any means at that moment telling them, build this. I'm just saying someone is going to build this. This is an inevitability, in my opinion.

So from that day on, I just sort of started fearing the moment when this would become economically viable and technologically possible. I just lived with the assumption this would happen as soon as we could do it. And there were efforts made through the years. I remember a Wired magazine article around 2018, 2019, where the guy interviewed his dying father and then took all of that, so you could basically have this chatbot, before ChatGPT, this chat interaction with the grandfather, and then the kids could know their grandfather. So you would see these early efforts before the tech was really there, people trying to kind of force-function this thing into being. And then it just really started crossing over into, like you said, this Black Mirror, sci-fi-becomes-reality moment.
And again, I'm not taking a position here of, I think this is horrible or I think it's wonderful. In the next topic I'll talk a little bit more about these beliefs versus truths. I think my point here is that we all have to be prepared for the fact that this tech is here, there will be a market for it, and psychologically society is not prepared to not have to grieve, to not go through these processes. And it's just a really weird thing to think about, honestly. It's a topic I really struggle with.
Yeah, I mean that's one of the fascinating but also scary things about AI that I've always thought is like what it enables fundamentally rewrites our relationships with each other and with what it means to be human, which can be very exciting but also very, very murky.
Yeah, I think there's just parts of AI that I sort of wish we didn't have to deal with.
Yeah.
And I would say this is one of them. I'm a realist. I understand it's going to happen and people are going to build companies around it, but I would be okay if that was not the case.
Yeah, yeah, I hear you. So maybe that's a good segue into our third and final big topic here, which had actually been spurred by just a random conversation, Paul, that you had mentioned to me this week, basically about what we believe versus what are the actual fundamental truths, both in life and in AI as a whole. We'll also talk about a few rapid-fire topics where people are starting to have some very strong opinions on certain fundamental things. So maybe you could give us some sense of what you were thinking about this week.
Yeah, so again, this episode wasn't designed to be this deep philosophical episode per se. I honestly hesitated even as of this morning. I was like, man, I don't know if I want to do this topic right now. But I've kind of learned over time that sometimes the topics I'm not sure I want to talk about end up being the ones that are most impactful to people. So I guess I'll share a little bit of what goes on in my brain sometimes.

So last week I was driving home and I went through this very random thought experiment where I was trying to contemplate, what is something that everyone can agree on? Literally, in my mind I was envisioning this straight line, and on the left are the statements where, if you said it, 100% of humans would agree that it was true or false. Universal agreement on something. And as I'm driving, I'm thinking, is there anything that would be 100%, that would live solely on the far left side of that line? And then as you move from left to right in your mind, you start to think about topics where people's beliefs start to diverge. It might be narrow at first, like, okay, 97% of people would agree this is true and 3% would not. And those people's opinions and beliefs start to differ, and right and wrong start to become subjective.

So again, I'm not actually 100% sure what triggered this. It was 6:30 in the morning, I was driving back from the gym, so this is only like a seven-minute drive. This is not like I was on some road trip and this happened. On the seven-minute drive home, this is November 12th, I'm thinking about this. After the fact, I came in, I was pondering it, and I saw Mike and Jess on our team and I was like, hey, this is a totally random idea, I have no idea what it means. And Mike's like, no, it's actually kind of fascinating.

So I think it's a bit of a combination. Mike, we started doing these AI Pulse surveys in part because I was trying to get at, what do other people believe? What do they think about these different topics? This AI boomers versus AI doomers thing is really starting to weigh on me, these sides where people are increasingly taking extreme positions, and they're doing it with high confidence that they're right and the other side is wrong. And I always get frustrated with that. I have a hard time talking to people who are so set in their beliefs that they can't actually have a logical conversation. I love talking to people who believe different things than me. I want to know why they believe those things, what led them there, what experiences, what insights, because I may shift my beliefs based on that. I think it's good to have this open dialogue. So I have difficulty when people can't find a middle ground to have these reasonable conversations.

So I saw a post last week, and this was sort of festering in my mind going into the middle of last week, where the BG2 pod, who we talk about all the time, I love that podcast, Brad Gerstner, Bill Gurley, two of my favorite podcasters, retweeted a post from David Sacks, who we've talked about as sort of the head of AI for the government at the moment, the lead AI advisor to the administration.
And Sacks had tweeted: AI optimism, defined as seeing AI products and services as more beneficial than harmful, is at 83% in China, but only 39% in the US. This is what these EA, what is it, effective altruism? Yeah, these EA billionaires bought with their propaganda money. It's like, okay, that's a pretty divisive tweet, but fine. So then the BG2 pod retweets this and says: as discussed by Brad and David Sacks on the All-In pod, AI is unpopular despite its ability to accelerate economic growth, improve health and education. Doomers have scared people. Time to push back. So it's like, okay, now we're giving names to each other, which helps with creating this division between us.

So my tweet was: what is the official term for someone who is neither an AI doomer nor an AI boomer? Someone who sees the enormous potential of AI to do good and create abundance, but is also realistic and cautious about the current and potential negative impacts. To me, that was a pretty middle-of-the-road tweet. Not trying to stir up confusion. I'm just literally saying, what do you call that person, that it doesn't have to be extremist? And I actually had somebody reply to that as though it was some extremist point of view to be in the middle. Like, what is going on with people?

So I think that was in my mind, and then last week we definitely saw lots of politicians jumping into this debate, an increasing amount. And so I had tweeted: the conversation and sense of urgency is starting to shift at the political level and with influencers. This is starting to feel like a pivotal moment for public sentiment about AI. Security and jobs are themes that could gain traction very quickly leading into the 2026 midterms. As I've said recently on the show quite a bit, what happens in politics is they look for the wedge, they look for the issue that can cause enough friction that they can create enough momentum behind it to move votes. And my tweet was actually in response to Senator Chris Murphy of Connecticut, who was sharing the Anthropic research on AI espionage that we're going to talk about in a minute. His tweet, again, the senator from Connecticut: guys, wake the F up. This is going to destroy us sooner than we think if we don't make AI regulation a national priority tomorrow.

So all this is going on, and then on top of that, and this is a little bit more personal, I have a 13-year-old daughter who constantly challenges my thinking with deep questions, often about science and religion, that I never thought to ask at that age. Honestly, things I didn't even think about till I was in my 30s. This is not in any way meant to be a religious discussion, but I was raised Catholic. So for 12 years we go to Catholic school and we are taught, this is the way the world is. I don't recall as a kid there being much room in those days to question things; you were just told. And so my knowledge, my fundamental truths about the world, came from the fact that that's what I was taught for 12 years. And then the final year of high school, to their credit, you take a religions of the world class, and in that class you learn about all these other belief systems, all these other religions, all these other gods that you'd never been taught about for 12 years. And so you realize, hold on a second, there are billions of people in the world who don't believe what I believe.
And so that kind of led, Mike, to the moment where I was explaining this to you and Jess. All of this is running through my head, I'm thinking about beliefs versus truths, and then I start to connect: hold on a second, this actually has a ton to do with the state of AI today and the kinds of things we talk about on the podcast. So with all that being said, I'll just walk through the basics of this as it relates to AI, because I think it actually matters to the other topics we're going to cover today.

I will preface this by saying I am not a philosopher. I have not studied this for a living. We may have listeners who are philosophy majors, who spend their lives contemplating reality versus belief systems and things like that. So everything I'm about to say is totally from a personal perspective and observations. A belief is something we think is true. People can disagree about it, it can be true or false, it can be strongly or weakly held, it can be justified or not. People can have conviction about a belief they hold and still be wrong. That is really important context for the AI situation we find ourselves in. A fundamental truth is true whether or not anyone believes it. So I can say time always moves forward. There's a fundamental truth. You can believe that or not, but I hope that would be very far on that left end of the spectrum, where 100% of people could agree that time always moves forward; it does not move backwards. If you drop a rock, it will fall down, not up. Humans need air, water, food and sleep to live. Humans are mortal; bodies age and die. Two plus two equals four. These are fundamental things that I would assume we could agree on as humans. Now, if we polled people, there is a chance that somebody would come up with some reason why they don't believe one of those things.

So we treat fundamental truths as non-negotiable constraints when we're building plans; we think about these things. We treat beliefs as testable hypotheses. These are things you run experiments against, and then you update beliefs based on data and experiences. This is the kind of stuff we talked about with Dr. Brian Keating at MAICON this year, the scientific process and this idea. The problem comes in when people have so much conviction about their beliefs that they mistake them for fundamental truths. Just because you believe so strongly something related to AI, or its impact on society, or whether AI avatars are good or bad, doesn't actually change whether it is or isn't. It's just what you believe.

So what science does is it takes beliefs, questions about the universe, observations and ideas, and it tests them, and that forms models and laws. The scientific process gets us to our best tested explanation of where we are right now. It uses evidence, experiments, predictions. This is what leads to things like the age of the universe, the standard model of physics, the scaling laws, the idea that if we give these models more compute and data they keep improving. These are tested beliefs. Someone comes up with an idea, they observe something, and then they test it along the way.

So why am I talking about all this right now? It actually has a fundamental impact, because of the increasing number of politicians and influencers who are voicing beliefs about AI with great conviction.
So the Senator Murphy thing: he may or may not be correct, but his belief is based on an Anthropic research report, which you may or may not believe to be true. You may think they're making up the fact that it was a Chinese espionage act. I don't know. But all this information is presented in media as though it is fundamentally true, and people aren't actually questioning, well, where is this coming from? This information lands with influencers who all of a sudden are interested in AI, haven't thought about it or researched it ever, and they come out with these really strong opinions and beliefs, when in reality that's all it is. It's just a subjective opinion, and oftentimes it's intended to advance their own agendas.

So on this podcast we talk a lot about the opinions and beliefs of these leaders, in part so that you can form your own educated beliefs about the current and future state of AI. We're not trying to tell you what to believe. We're trying to give you these viewpoints as objectively as possible so you can experiment in your own way and think about these things.

So I'll try this with a thought experiment, Mike, to put a little fun twist on this. I'm going to make a statement about AI, and then you consider if you think it's true or not. When I say these, in your mind, kind of visualize that line, like, okay, we're all on the left end of this spectrum. So: AI systems make mistakes; they are not fully reliable. Next: AI is already useful across many tasks. Human oversight is essential in high-stakes uses of AI. These are all, I would think, pretty far on that left side, yes. Models learn and can amplify biases, pretty standard. Current AI systems present clear and present dangers in society.
Now you're in the middle.
AI literacy is essential to understanding and applying AI to your work. That would seem pretty obvious, but not everybody agrees. Okay, here we go: LLMs, large language models, present a clear path to achieving AGI by 2030. We're going to come back to that when we talk about Yann LeCun. AI companionship, in which humans develop emotional bonds with machines, will solve loneliness and be a net positive in society. I can see that one being about 50-50, maybe 55-45. Schools at every level should embrace AI and deeply integrate it into classrooms. AI will lead to significant job loss over the next one to two years. A couple more: It is possible to fully automate the majority of knowledge work in the next decade. Now these are beliefs that start to actually have a fundamental impact on the economy and society. It should be illegal for AI labs to train their models on copyrighted material without explicit permission from creators. AI will fundamentally transform the future of work. And the last one: we are in an AI bubble fueled by excessive valuations that will lead to a near-term crash.

So, Mike, I intentionally framed these from what I believed would be the highest to the lowest consensus. And the point is that AI moving forward will increasingly be based on people's beliefs, often with very little context, because as the general public becomes more aware of AI, they will form kind of snapshot beliefs based on what they see and experience. So whether true or not, those beliefs will begin to affect how AI is regulated, how it is taught in schools, how it is applied in business. There will be accelerating friction points around its impact on jobs, the economy, the educational system, security, geopolitics and society.

So the whole point of this, Mike, and again, I wasn't sure it was even something I should bring up, is to stress that we all have to do our part to be open-minded, to listen to the opinions and beliefs of people we trust, to find those people we actually trust who do their homework and think this stuff through, to form our own educated positions on these things, and to do our best to push for balanced, logic-based conversations in our companies and our communities. Because, and we'll touch on this in a little bit, Dario Amodei was on 60 Minutes last night, right? If a family member or someone in your company has been ignoring AI up until last night and then watched Dario Amodei, that is a very specific belief system about where AI is going, and Dario presents it with very high conviction. And so other people can be influenced. Then you have people like David Sacks and Yann LeCun who think that what they're doing is basically criminal, that they are trying to slow everything down for their own benefit. So this is again the whole idea that we have to understand the belief systems people have, why they have them, and hopefully be open to listening to other perspectives so we can adapt our thoughts over time. Like the scientific method: when new data presents itself, part of what makes science so great is that we evolve our beliefs, our thinking, in new directions. The standard model of physics has survived for how long, like 100 years or something? And yet they're challenging it every day, and they know it has flaws, but they can't prove them yet. They can't find out why.
And so that's kind of how I feel about AI moving forward is like, we have these basic concepts, sometimes we'll even call them laws, like scaling laws.
Right.
But it doesn't mean it is like a law of nature that is always going to be true. It just means right now it's the best we got and it seems to be holding up. And so when we talk about jobs and the impact on economy and the impact on education, it's coming from an educated point of view of like, we've done a lot of homework on this, you look at a lot of data, but the second something comes out, definitively saying this is not what's happening, I will happily move my belief. But yeah, so I don't know, Mike, I know that's a lot to process, but I just felt like it was an important topic to throw out there given how much public attention we now see coming to AI.
It's critical, because you're going to be told one way or another what to believe, or be presented with beliefs that are masked as truths when they're really not, like you said. And that's only going to get crazier in the next 12 months, at least in US political discourse. So, yeah, all the more important to talk about.
And I know the first rapid-fire topic sort of builds on this. And again, I don't even remember how this all transpired, but part of the next topic is what drove me to say, you know what, I'm just going to talk about this, because I really feel like we're hitting that tipping point.
No, absolutely. And it's a good transition into this topic, because the next couple of topics are definitely along these lines. This first one is really about a few different posts we were tracking that all kind of played off each other, about fears around AI's societal impact creating some unusual political alignment. So this started when the very conservative commentator and influencer Matt Walsh posted a warning on X that AI will wipe out at least 25 million jobs and, quote, destroy every creative field. He described the situation as all of us, quote, sleepwalking into a dystopia, and criticized leaders for not taking the threat seriously. He actually followed this up with the point that the political battle lines have not yet really been drawn around AI. He argues that politicians don't know if being anti-AI is what he would call, quote, right-coded or left-coded. Now, the interesting thing about these posts is that they received immediate agreement from across the aisle. The progressive journalist Ryan Grim posted that Matt Walsh is right. The liberal podcaster Jon Favreau and centrist commentator Tim Miller also signaled they were on board with his thoughts. And Miller even said, look, if he, Walsh and Grim are all in agreement, which never happens, it seems like a decent place for a politician to stake out some turf. So, Paul, I don't follow all these people, but just based on what I have seen in my research, these are people with very strong opinions, very strong beliefs, who literally, I don't think, have ever agreed on a single issue. So it's very interesting to see how this is breaking down. Do you see a clear battle line being drawn here? It seems like there isn't one yet; there are people in different parties agreeing on the same thing right now, which is rare in our society.
Again, go back to what does everybody believe. It's like, whoa, we're actually getting both sides of the political aisle to all of a sudden move toward the left end of that spectrum. Yeah. So again, this sequence, though. If people are new to the podcast and don't know how we figure out what to talk about: in essence, I spend a good amount of time on Twitter, in a very filtered way, looking at notifications from a few hundred accounts that I have curated over the last 15 years or so on Twitter, on X. We then follow certain media outlets, we look at research reports, I listen to podcasts, we watch videos. So we are constantly consuming information and trying to piece together the story each week.

And so the main topic we just talked about, beliefs versus truths, I put in our sandbox for the week on the 12th, the morning of the 12th, I think, is when that happened. Then this post from Matt Walsh that you're talking about, Mike, came out at 3pm on that same day. Now, I didn't know who Matt Walsh was. He's not in my circle of influence, not somebody I follow. It showed up in my feed and I clicked on it, and I was like, wow, this dude has 3.9 million followers. This is a pretty legitimate thing. And the post as of now has 5.1 million views. So it's like, okay, I don't actually know who this guy is, I don't know what his past belief systems are, and it actually doesn't matter, because the whole point to me was that someone who obviously has extreme positions one way or the other has a whole bunch of other people, who also are influencers, retweeting and saying, count me in, we've never agreed on anything, I agree with him now. It's like, hold on a second. So I put it into our sandbox and I said, I don't follow this guy, I get a sense he's very politically divisive, taking extreme positions one way or the other. Again, I hadn't studied who he was or what he says. And I said, that being said, he has 5 million followers and he now has an opinion on AI. More influencers are showing up to the conversation, and that's the story, more than this one guy's thoughts. So that was my original thing, and I put the tweet in our sandbox.

And then the next 24 hours happen. I'm like, oh, wait, here's another one, here's another one, here's another one. And I'm just putting all of these in. And then that guy ends up retweeting, or sharing something else. So the next morning, after all these other people have sort of jumped on, he said: AI is going to cause a massive political reshuffling. The sides in the future will be pro human versus anti human, because that's what the AI fight is really about. It will be interesting to see where everyone lands. There will be a lot of surprises, I think.

And so again, this goes back to my point about beliefs versus truths. All of a sudden, influencers and politicians who haven't made their living in this, who have not spent the last decade thinking about this, are going to come in, either because it's coming after creative work, it's coming after jobs, it's coming after religions. It's going to come after things that they care about or that their audience cares about. And so they're going to start to have opinions, Joe Rogan, somebody like that. And you start to move markets, you start to move votes, based on this stuff. So it's happening.
Again, if you're watching this with the intensity we are, watching what's going on with influencers and politicians, I can promise you there is a change. It fundamentally feels different in the last 30 to 60 days than it did before that.
And look, I don't want to hold my breath on this, but you better pray that your influencer of choice has some basic AI literacy because we are about to have some wild conversations if they don't.
Yeah, and again, I don't know. I mean, if AI somehow unifies people who would never agree on anything and it opens their minds to listen to each other, great. But I do think, more strongly than I did a month ago when I was saying it, that the politicians will try and find the wedge. There was that group we talked about, Mike, probably four or five episodes ago, backed by Greg Brockman, where they have a $100 million super PAC to fund politicians on either side of the aisle, whoever is pro AI. So as long as you're willing to push for no regulations and accelerate at all costs, you can get money from them; it doesn't matter what political party you're with. The current administration is not a fan of that super PAC, obviously, but it is going to create a very messy 2026 political season. Yeah, I think we're very near the point of no return on AI becoming a very big political story.
I think we talked about this on a past topic a few episodes ago, but I would be shocked if there's not some pretty rigorous, well funded polling in the field right now trying to figure out which position is going to be the lightning rod.
Yep, agreed.
All right, so our next rapid-fire topic. This week, Anthropic says it has disrupted what it believes is the first large-scale cyberattack executed almost entirely by AI. In a new report, the company details a sophisticated espionage campaign it detected in mid-September. The company assesses "with high confidence," in their words, that a Chinese state-sponsored group was responsible. The operation targeted roughly 30 global organizations, including tech companies, financial institutions and government agencies, and in a small number of cases it succeeded in compromising those organizations. So how this cyber espionage campaign worked: the attackers used Anthropic's Claude Code tool to execute the attack. They jailbroke the model, tricking it into bypassing its guardrails, in part by telling it it was working for a legitimate cybersecurity firm. The AI then performed 80 to 90% of the campaign autonomously. It conducted reconnaissance, identified vulnerabilities, wrote its own exploit code, and harvested credentials. Anthropic said the AI operated at speeds human hackers could not match, making thousands of requests, often multiple per second. And they say their own team used Claude extensively to analyze the incident, arguing that AI is now crucial for both cyber defense and offense. So, Paul, I'm curious, how big a deal is this? It seems like a big first that a lot of people were talking about in the AI community.
I can't imagine that this is the first time this has happened, and I would assume cybersecurity firms and AI labs are fully aware of this. I feel like it's probably the first time that an AI lab has directly acknowledged it and shared some details. Now, they got lit up online by the people who say this is all about regulatory capture, and of course, they're releasing this stuff and there's not much actionable data. Even though they're putting it out there, they're not actually telling people how to prevent these things. So again, this goes into the idea that there are multiple sides to this now. You've got people who, no matter what Anthropic does, are going to lay into them about being funded and started by effective altruists, and say they just want control. The argument from that side is that Dario and Anthropic think they're the only people who can safely bring superintelligence into the world, and so they're doing all these things to try and prevent the acceleration of AI. And it's weird, because for a long time Dario was very behind the scenes, wouldn't do interviews, wasn't active on Twitter, wasn't publishing anything. And then in the last 18 months he's become much more vocal. He did the 60 Minutes episode that I mentioned; we'll drop the transcript and the video in the show notes. So I don't know. This seems also like an inevitability. If you had asked me three years ago to make some predictions about what is inevitable, this was a given. And it's not because I know something they don't know. They would tell you point blank in interviews that this was going to happen and how it would happen, but people weren't listening yet. So many times when I see these things, it's like, well, yeah, of course this was going to happen. And then I realize most politicians and business leaders haven't been contemplating these things for as long as we have. So yeah, these inevitabilities just don't surprise me at all when I see them. If anything, I'm just surprised it took someone so long to publish a report like this.
And not to diminish how important this is, but when you put it that way, it's a little different than someone in Congress, like you mentioned, saying, hey, wake the F up, these are going to kill us all. Because the cyberattack angle, I think, has a very emotional or narrative pull too. It sounds like a movie, right? But it's interesting to see how that became this kind of really emotional, hot-button issue.
Right. This will get played up for sure by politicians, because they're also claiming it was a Chinese espionage attack. Well, the whole premise of the government's play right now in AI, and why we have to accelerate, is so we don't lose to China. So anything that builds that story and helps perpetuate that belief system, they're going to run with. Again, I'm not saying right or wrong. Maybe this is exactly what happened, this is a huge risk and a threat, and we should be doing something more about it. I'm just presenting the information and saying this is what both sides will say here. So you will see some people who say this study is a sham, basically, and that Anthropic is only doing it for regulatory capture so they can control AI and be the ones that usher in superintelligence. And then you're gonna have other people who say, this is a major problem, and we've actually seen it also, and here's our report about it. There are multiple sides to the story, and they can all actually have elements of the truth in them.
In our next topic, another kind of controversial AI issue in a different domain. For the first time, an AI-generated artist has reached number one on a Billboard country chart with a song. This is an AI-generated artist named Breaking Rust, with an AI-generated song titled Walk My Walk that recently topped the Country Digital Song Sales chart. Billboard, of course, has tons of different charts; this is just one of them. But Billboard did confirm that the music is AI generated. The song is credited to a human who runs another AI music project, of course, but the AI artist itself has quickly accumulated 1.8 million monthly listeners on Spotify, which surpasses several established human artists in this genre. And the AI-generated song's placement at number one actually nudged out a human artist, who was pushed to the number two spot for the week. Interestingly, Billboard has reportedly identified at least six AI or AI-assisted artists that have charted in recent months. They did note, however, there is increasing difficulty spotting what is AI generated and what is not. So, Paul, this is another very emotionally charged issue. It's certainly the first time it's happened on this particular country chart, and not the first time AI-generated music has gotten play. But based on the rankings, at least, it does sound like a lot of people are unable to tell, or don't really care, if music is AI generated, as long as they like it.
Yeah, I mean, I assume that's how this plays out. Again, I get that this is an emotionally charged discussion for some people, and I totally empathize with that. I would put this in the camp of inevitabilities I could have told you were coming, you know, three, four years ago. In part because even before ChatGPT, they were working on predictive models for creating shows and songs and movies. I think we even talk about this in our 2022 book, Mike. The basic premise: think about Netflix. Take all the viewing data on Netflix across different genres, different audiences, and let's rewind to 2020, two years before ChatGPT. The premise then was, well, if we can predict what humans will watch, then we can actually construct shows that we know will be hits before we ever release them. So this was happening in the movie industry and in places like Netflix, and certainly in the music industry, where they would analyze the top things and say, well, what are the commonalities behind them? Put it into machine learning systems and make predictions about what the song should be about, what words should be said, who should sing them. If you're in movies, which actors or actresses would give you the greatest chance at a blockbuster. All of this was happening in the 2010s; it was all about prediction. All generative AI did was layer in the ability to create the stuff instead of needing humans to create it. So this is just a mashup of traditional machine learning making predictions about human behaviors, what will they listen to, what will they watch, and then generative AI creating the thing on demand instead of having to wait for a human artist to do it.

So, again, inevitable. Not necessarily great for society, but it is a capitalistic society, and if people will listen and pay subscriptions to listen to stuff that is not made by humans, then the platforms will allow stuff not made by humans to hit the top of the charts and become popular. Hell, they'll probably serve it up in their algorithms. It's like, hey, people don't actually care, let's serve up whatever they want. This is the future of Facebook, of anything that Meta touches: just give people what they want, as long as they stay on the platform long enough. So, yeah, again, this goes to the kind of questions we ask in the Pulse. How do you feel about it?
53:11
And I don't.
55:32
I don't know. Like, I actually haven't stopped and thought about that. When I answer the question this week, I'll actually stop and think about this one. I just look at things as: are they or are they not going to happen, how does it affect me, and how would we talk about it to our audience? And this is one of those ones where I don't know how I feel about it. I kind of hate it, I think. But I also feel like what'll happen is there will probably become places where things are authentically human, kind of like what Etsy did in the art world. It's going to be stuff like that, where it's like, you know what, give me the social channel where it's human creativity; I don't want the AI stuff. And then you're going to get people saying, yeah, but AI was already used to do the beats and stuff, so is that different? You're just going to get this bickering back and forth. My hope is that human artists are appreciated even more. Yeah, it gives all of us the ability to create stuff we want to create, and that's fun and exciting. I could never make a song before, and now I can mess around and make a song about something. That doesn't diminish the value of a human actually doing music. In fact, in my opinion, the fact that they can do that without these AI tools makes you appreciate their talents even more. Again, I come from the perspective of: I'm a writer, my wife is an artist, my daughter's an artist. I think deeply about the creative side, and so my hope is that human creativity actually has a renaissance and is appreciated even more.
55:33
Yeah, I wonder too how much of the backlash against this stuff, rightly or wrongly (and I certainly empathize), is about people finding out that what they thought was a truth is actually a belief. Right? That realization can be really tough. I'm not saying what your truth should be, but it is really hard if you have a deeply, deeply held belief, or truth, that human creativity is superior to machine creativity. Is some of the anger around this the fact that sometimes you might find that coming into question? I don't know.
57:02
Yeah, maybe. That's the tough part to figure out. And again, we addressed that in the 2022 book. I wrote that section about creativity, and my whole point was: it's going to be creative. It's going to be able to create, and sometimes better than a human. Like, if you gave a blind taste test of A and B, where one song was created by a human and one by a machine, and you don't know which is which, you might prefer the machine one, or, oh wait, no, the human one. And my whole point then was that the creativity won't come from any true human experience. It doesn't feel pain, it doesn't know love, it doesn't have emotions, it doesn't have senses. It doesn't have all the things that go into human creativity. And so in the end, human creativity means more, because it came from someone who has experienced life. It's not the AI; it's machines and mathematics making predictions. So yes, the end product can simulate creativity and it can feel creative. But that's why I think human creativity just remains unique. It comes from a different place.
57:35
Yeah, couldn't agree more on that. Next up, Cursor, the AI coding startup, has raised $2.3 billion at a $29.3 billion valuation. That new valuation is nearly 12 times what the company was worth in January. The startup was founded by four MIT grads who are still in their mid-20s. It's a popular tool that learns a developer's coding style to help autocomplete, edit, and review lines of code. The product has earned a following from engineers and tech CEOs, including Nvidia's Jensen Huang. This latest funding round, which is the company's third this year, was co-led by Accel and Coatue, with new investors including Google and Nvidia coming on board. The tool allows users to toggle between different AI models, like those from OpenAI, Anthropic, and Google, though the startup pays substantial fees for access to those models. In late October, the company launched its own model called Composer, and they plan to use the new capital for technical research and to invest in scaling Composer. So Paul, I know they've got their own model now, but this certainly does seem to be a bit of a rebuttal to people who say an AI wrapper startup can't do well. These are some pretty breathtaking numbers for Cursor.
58:40
This is a really big market. Let me just connect the dots on why this is relevant to people who aren't in coding. All the labs are using tools like this to augment code, to write code, to improve code internally. So it's enabling the building of software much faster and changing products much faster, so the software you use to run your company and do your job can be rapidly improved with fewer developers, because they can code so efficiently. It's also empowering people. We talked about Replit as an example, where people like you and me, Mike, who aren't coders, are going to be able to build apps, maybe even build companies, that we would have never been able to build before, because now we can use these tools. I also listened to an interview with Satya Nadella last week where he was talking about their coding tools and how they dominated the market, and then people like Cursor came along and just made the market so much bigger. So yeah, this is a fast-growing marketplace, and obviously it's a very fast-growing company. But the trickle-down is that it's going to accelerate the ability to develop software, for developers and for non-developers. And I think companies like this are going to just keep growing, and everybody's going to want to play in this game, all the major software companies and AI labs.
59:57
Our next topic, some other startup news. Former Twitter CEO Parag Agrawal has a new AI startup called Parallel Web Systems, and they have just raised $100 million in a Series A funding round co-led by Kleiner Perkins and Index Ventures, which values the two-year-old company at $740 million. Now, why we're talking about this, why it's interesting, is that Parallel aims to build web search infrastructure designed specifically for AI agents. Agrawal stated that AI agents are increasingly becoming the web's primary users and require access to live, up-to-date information to complete tasks for enterprise customers. So they provide APIs that let AI systems search the web. Unlike traditional search engines that rank links for humans, Parallel's system returns optimized content, or tokens, designed to feed directly into an AI model's context window. The company says this improves accuracy, reduces hallucinations, and cuts operational costs. So, Paul, this seems like a pretty big deal and points to the fact that the web is likely to be serving, perhaps primarily moving forward, AI agents, not just humans, at a high level for sure.
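(To make the agent-first search idea concrete, here's a hedged sketch of what calling such a service could look like. The endpoint, request fields, and response shape below are hypothetical, invented for illustration; this is not Parallel's actual API.)

```python
# Illustrative only: a made-up agent-search client, not any vendor's real API.
import requests

def search_for_agent(query: str, max_tokens: int = 2000) -> str:
    """Return web content already condensed for an LLM's context window.

    A human-facing search engine returns a ranked list of links to click;
    an agent-facing one returns source-grounded text sized to a token budget.
    """
    response = requests.post(
        "https://api.example.com/v1/agent-search",  # hypothetical endpoint
        json={"query": query, "max_output_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    # Hypothetical response shape: condensed text plus the URLs it was built from.
    sources = ", ".join(payload["source_urls"])
    return f"{payload['condensed_text']}\n\nSources: {sources}"

# An agent would drop this string straight into its prompt
# instead of crawling and summarizing links itself.
```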
1:01:13
Like, a couple things interest me on this topic. One, it's a notable founder. Two, very notable investors. Three, a very notable $100 million Series A; that is not a common raise at a Series A, that's a pretty significant number. And then the fourth is just this continued need for us to be thinking about what happens when agent-to-agent becomes the norm on the web. When it's agents visiting your website, not humans. When it's agents interacting with your chatbot, not humans. I think everyone is starting to try and figure this out. Venture capital firms are starting to make some bets as to what the future of the Internet looks like. And so companies like this are worth paying attention to, because this is obviously heading in the direction of trying to solve for what the next version of the Internet looks like and how it affects commerce and marketing and sales and everything that we, you know, think about all the time.
1:02:30
Yeah, we say often, in one way or another, follow the money. Right? So if you see this kind of money going into a space like this, that gives you a decent clue as to where the future is going. Next up, Meta's Yann LeCun is reportedly planning to leave the company to launch his own startup. LeCun is a Turing Award winner, considered a pioneer of modern AI, who we've talked about quite a bit. He has headed Meta's Fundamental AI Research lab, known as FAIR, since 2013. And this comes amidst some of the major strategic shifts from CEO Mark Zuckerberg that we've been discussing on past podcast episodes. Zuckerberg has pivoted away from the longtime research focus of FAIR, instead prioritizing the rapid development of AI products to compete with OpenAI and Google, among others, following a relatively botched release of Meta's Llama 4 model. LeCun has long argued that the LLMs at the center of Meta's strategy cannot reason or plan like humans. His research focuses instead on world models that learn from video and spatial data, and he is reportedly in early talks to raise funds for a new venture focused on that type of work. So, Paul, in episode 164 we talked about how Meta's recent shakeups around talent were not favorable to LeCun and he was likely to leave. So I think we can chalk that up as an accurate prediction.
1:03:23
Yeah, again, this is pretty obvious. This was the direction this was going to go. To my knowledge, he hasn't yet officially commented on the fact that he's leaving. However, there was a Wall Street Journal article with the headline, "He's been right about AI for 40 years. Now he thinks everyone is wrong," which he retweeted. That article said he was leaving, and it also said he did not reply for comment. So I would say that's pretty close to a confirmation if you're retweeting the article saying you're leaving. We've known this for a while. He's not a big fan of large language models; he sees them as a distraction. He's told college students, don't waste your time studying large language models, like, it's not going to work eventually. And that is the very opposite of Meta's belief and direction at the moment. The Wall Street Journal article, which we'll link to, had a quote from him from last month at a symposium at MIT, where he said, "I've not been making friends in various corners of Silicon Valley, including at Meta," which still employed him at the time, saying that within three to five years, world models, not large language models, will be the dominant model for AI architectures, and nobody in their right mind would use large language models of the type that we have today. We touched a little bit on his background on the show. He won the Turing Award in 2018, the highest prize in computer science, along with Geoffrey Hinton and Yoshua Bengio, for their foundational work in neural nets, which was sort of rebranded as deep learning around 2010. And just for context, a world model is an AI's internal mental model of how the world works and behaves. Think of it as an inside-the-machine simulation of the world. It helps the AI predict what will happen next if something changes or if it takes an action. Just like humans use their understanding of physics and cause and effect to imagine outcomes, like, if I drop this glass, it'll break, a world model lets the AI imagine outcomes before they happen. He's talked for years about the fact that this has to be part of it, especially in robotics, where the system is going to need to be able to anticipate these things. Then, a quick side note: I mentioned earlier the Senator Murphy quote about "don't f this up" from the Anthropic thing. LeCun did comment. Now, LeCun went away from Twitter for a while, I think he was only on Threads, and then he just keeps getting sucked back in, because I don't know if anybody is on Threads anymore. He replied to Senator Murphy and said, you're being played by people who want regulatory capture, referring to Dario and Anthropic. They are scaring everyone with dubious studies so that open source models are regulated out of existence. Yann is not shy about offering opinions on things. He has very high conviction in his beliefs, and he has been proven right time and time again when people doubted him. The question is, is he going to be right this time, when all these labs are spending hundreds of billions of dollars on large language models that he thinks are a fool's errand, basically? So we will see.
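(To make the world-model idea a bit more concrete for readers, here's a minimal, purely illustrative toy example, not LeCun's architecture or any real system: a function that, given a state and an action, predicts the next state, which is what lets an agent "imagine" outcomes before acting.)

```python
# Illustrative only: a hand-written toy world model. A real one would be a
# learned neural network trained on video or sensor data.
from dataclasses import dataclass

@dataclass
class State:
    # A toy "world state": how high the glass is held and whether it's intact.
    glass_height_m: float
    glass_intact: bool

def world_model(state: State, action: str) -> State:
    """Predict the next state given the current state and a candidate action."""
    if action == "drop" and state.glass_height_m > 0.5:
        return State(glass_height_m=0.0, glass_intact=False)
    if action == "place_on_table":
        return State(glass_height_m=0.75, glass_intact=state.glass_intact)
    return state

def plan(state: State, candidate_actions: list) -> str:
    """Pick the action whose *imagined* outcome keeps the glass intact."""
    safe = [a for a in candidate_actions if world_model(state, a).glass_intact]
    return safe[0] if safe else candidate_actions[0]

print(plan(State(glass_height_m=1.2, glass_intact=True), ["drop", "place_on_table"]))
# -> "place_on_table": the agent simulates outcomes before acting in the world.
```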
1:04:42
We will see. All right, next up, Google is rolling out some big updates to NotebookLM. It's integrating Deep Research, which is a feature we've talked about plenty of times, found in its Gemini model. Deep Research acts as a dedicated researcher: it takes a question, creates a research plan, and browses dozens or hundreds of websites to generate an organized report grounded in sources. In NotebookLM, this integration now allows users to add both the final research report and all of the web sources used directly into their notebook. The update also adds support for new file types, including Google Sheets, Microsoft Word documents, and images. They're also launching Featured Notebooks, which are collections of high-quality sources curated by experts, authors, and partners like The Economist. These are preloaded with content on complex topics, everything from science to advice, and include pre-generated features like Audio Overviews to make the material more accessible. Now, Paul, the Featured Notebooks thing is cool, but really, Deep Research in NotebookLM feels like a big deal to me, because the ability to import all of your sources from Deep Research into a notebook would be super useful for my use cases. It would help with verification of the research as well. So I'm really excited about that.
1:07:45
Yeah, I'm a huge fan of NotebookLM. I don't spend enough time in it. It's one of those where every once in a while I'm like, oh, I haven't used that in a week, and it's almost like I want to find those use cases to get back in there. But yeah, we obviously are big fans of Deep Research, and so combining the power of the two is awesome. Have we done a Gen AI app review of NotebookLM yet, Mike?
1:09:10
We have, yeah. But we honestly should just do another one because they've been shipping so many features too.
1:09:29
Yeah. So if people aren't aware, with our AI Academy by SmarterX, which I talked about at the beginning, one of the benefits for our AI Mastery members is that we drop Gen AI app reviews every Friday. They're like 15- to 20-minute reviews: what the tool is, what it's capable of doing, whether you should take a look at it, what the pricing model is, availability, just the fundamentals of it. Part of the reason we do it in this weekly model, where we're always updating, is that when a model or a tool gets updated, we can just do a version two of it, and then you can go back and look at the original. So when these features are changing so often, it's a cool format to be able to drop those. So yeah, if you're an AI Mastery member, you can go in and watch the first NotebookLM one. You did that one, Mike, right? Yeah, maybe we'll have version two of that coming up soon.
1:09:35
All right, our final topic this week: we've got some new McKinsey research on the state of AI in 2025. They just released a new 30-plus-page report based on survey data, and they found that 88% of organizations report regularly using AI in at least one business function. But nearly two-thirds say their organizations have not yet begun scaling AI across the enterprise; most are still in the experimentation or piloting phase. This gap shows up in the bottom line: only 39% report any EBIT impact at the enterprise level. However, 62% of respondents do say their companies are experimenting with AI agents. And the survey found that there's a small group of AI high performers, about 6% of respondents. These companies are three times more likely to aim for transformative change and are more focused on using AI for growth and innovation, not just cost-cutting, something, Paul, we've talked about several times. High performers are also nearly three times as likely to have fundamentally redesigned their workflows to deploy AI. Now, a quick comment on the methodology: the survey was active in June and July 2025, and they got responses from almost 2,000 professionals across more than 100 countries. According to McKinsey, the respondents represent, quote, "the full range of regions, industries, company sizes, functional specialties and tenures," and 38% say they work for organizations with more than a billion dollars in revenue. So, Paul, it seems like some more useful data to highlight. I'll say the percentage of people experimenting with agents jumped out at me, with 62% saying that. That might be due to a broad definition of agents, or maybe that's just what's going on. I'm not sure.
1:10:24
Yeah, I think the big theme here, Mike, that just jumped out to me is what we say all the time: it is early, and you likely are not behind your peers and competitors. We talk to companies all the time who are doing really cool stuff, and they just feel like everybody else is running past them, and it is usually not the case. The number of people that are in the piloting phase, I thought that was interesting, because we ask that question in our State of Marketing AI report every year; we've done it for five years now. And it's similar, it sort of jibes with our research. When we asked that this year, for the 2025 report, we had 40% at the understanding phase, 46% at the piloting phase, and only 14% at the scaling phase. So it's interesting to parallel that over to our research, which had about 1,800 respondents. And then there's the one about scaling, where they break it down by size of company.
1:12:07
1:12:58
What they showed was that larger companies are more likely to have reached the scaling phase, like 39% for $5 billion-plus companies, and then down to around 23% for the $100 to $500 million range. And then on agents, the way they did define them, because I did jump in there just to see how they were defining them: they said organizations are also beginning to explore opportunities with AI agents, systems based on foundation models capable of acting in the real world, planning and executing multiple steps in a workflow. And then they said use of agents is most often reported by respondents working in technology, media and telecommunications, and healthcare. So it's a good study, worth the read. I didn't get through the whole thing before today's episode, I was kind of bouncing around and trying to look at some of the highlights, but it's worth a download. And I think we'll probably spend a little more time thinking about that one, Mike, and see if there's anything else interesting in there.
1:12:59
Sounds good, Paul. So before we wrap up here, just a quick announcement: if you have not left us a review yet on your podcast platform of choice, we would greatly appreciate you taking just 30 seconds to let us know how you're enjoying the podcast. It helps us get better, helps us improve, helps us reach more people. So please go ahead and do that. Paul, thanks again for wrapping up another busy week in AI.
1:13:50
Yeah, and one more quick note. We will have a second episode this week: an AI Answers episode on Thursday the 20th. That'll be from our Scaling AI class that I taught last Friday. So Cathy and I will be back on Thursday the 20th for an episode, and then, Mike, we've got to figure out... I'm on vacation next week, so I gotta...
1:14:12
Yeah, I don't know.
1:14:31
We're gonna have a regular weekly episode. Okay, so, yeah. So definitely Thursday the 20th. If for some reason we don't have an episode on the 25th, it is because it is Thanksgiving week and I am not home. We'll see if we can squeeze one in this Friday, maybe record it, but if not, we will be back after Thanksgiving week. So if I don't talk to y'all before then, if you don't hear from us, have a great holiday, but otherwise, yeah, we'll be back on Thursday. All right, Mike, thanks a lot. We'll go tune in and see if we get a Gemini 3 model this week.
1:14:31
Sounds good, Paul. Thanks so much.
1:15:09
Bye guys. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.
1:15:11