#176: ChatGPT Atlas, ChatGPT Atlas Security Issues, Letter to Pause Superintelligence, Amazon’s Plan to Automate 600,000 Jobs & New Data on AI Relationships
Episode 176 covers ChatGPT Atlas's launch and security vulnerabilities, a superintelligence pause letter with 700+ signatories, Amazon's plan to automate 600,000+ jobs by 2033, and emerging concerns about AI relationships among teenagers. The hosts discuss regulatory challenges, job displacement risks, and the acceleration of AI capabilities across multiple domains.
- Agentic AI browsers create unprecedented security risks through prompt injection attacks that can hijack user data, yet companies are shipping them before solutions exist—establishing a pattern of releasing unsafe products and hoping security researchers find issues later
- The superintelligence regulation debate reveals a fundamental governance paradox: you cannot prove AI safety without building the dangerous system, yet centralizing that development in government hands creates new power concentration risks that may be worse than the original problem
- Job displacement from AI automation is not theoretical—Amazon's internal memos show concrete plans to eliminate 600,000+ roles by 2033 while doubling sales, signaling that automation will accelerate across supply chains, logistics, and knowledge work simultaneously
- Tesla's self-driving progress (95% disengagement-free driving) provides a real-world model for how AI agents will transform business: initial high-friction adoption with frequent human interventions, gradually declining to near-autonomous operation as systems improve
- Teen AI relationships (1 in 5 high schoolers report romantic/friendship connections with AI) represent a generational shift in mental health support and emotional attachment that parents and educators are largely unaware of, creating a supervision and safety gap
"The economy stability and growth over the last like 12 to 18 months is in large part being driven by capital expenditures for AI on the infrastructure for AI itself. If you extracted energy and data center plays from GDP, it's like, do we even have growth?"
"I am struggling deeply to find relevant, valuable agentic use cases in my own work at the moment. I'm sure that will change, but I'm not there yet."
"The security and privacy risks involved here feel insurmountably high to me. I certainly won't be trusting any of these products until a bunch of security researchers have given them a very thorough beating."
"We call for a prohibition on the development of super intelligence. Not lifted before there is broad scientific consensus that it will be done safely and controllably and strong public buy in."
"How do you prove superintelligence will be safe without building it? How do you prove a plane is flight worthy without flying it? You can't."
The economy stability and growth over the last like 12 to 18 month is in large part being driven by capital expenditures for AI on the infrastructure for AI itself. If you extracted energy and data center plays from gdp, it's like, do we even have growth? Becomes a real question. All of this is now starting to happen where everyone's sort of simultaneously realizing like, oh my gosh, this is a huge deal and we have no idea how to handle any of it in education, business and in the economy. Welcome to the Artificial Intelligence show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Raitzer. I'm the founder and CEO of SmartRx and marketing AI institute and I'm your host. Each week I'm joined by my co host and Marketing AI Institute Chief Content Officer Mike Caput as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us us as we accelerate AI literacy for all. Welcome to episode 176 of the Artificial Intelligence Show. I'm your host Paul Raitzer, along with my co host Mike Put. We're recording Monday, October 27th at 11am Eastern Time. It seems like it's going to be some stuff going on this week. I don't know, like, it's usually by like Monday morning. You can already get a sense of it's going to be crazy or not. I don't know that it's like crazy big new model drop this week, but always something going on. So we will keep track of it. I've already got like five things in next week's sandbox. You probably saw that already because like Saturday, Sunday, I just started putting stuff into next week because this week was already packed. All right, so this episode is brought to us by Macon 2025 on demand. If you missed Macon 2025 or if you were there and want to relive some of the sessions, you can do that now. So we have 20 top breakout and keynote sessions that are available as part of the On Demand package. There was about 47 or so sessions overall, so almost half of the sessions were recorded. They're now available on demand. That includes my opening keynote, the Move 37 moment for knowledge workers becoming an AI driven leader. Overcome fear, Accelerate growth. Beat the competition with Jeff Woods. That was very highly rated. It was an incredible Talk. We've got Mike's 30 AI tools shaping the future of marketing, which is always a showstopper and always a packed house. Standing room only for that one. Andy Crestadena, Better than Prompts how to build custom GPTs for marketers. Michelle Gansley, former chief AI officer at McDonald's at Empowering Teams in the Age of AI How McDonald's is Building an Ad Ready Workforce. Jeremiah Aoyang was amazing. The future of AI marketing. We had the human side of AI with Kath Anderson Xiaoma from Google DeepMind and Angela Pham from Meta. Your interview, Mike with Alex Cantrowitz of the Big Tech Podcast was incredible. The rise of the filmmaker with PJ just endless. And then my final talk with Dr. Brian Keating, which was amazing. I've been thinking, Mike, ever since that talk, like, oh, I should have asked him this question about the origin of time. And like it just, it was an amazing conversation. So that was a reimagining what's possible with Dr. Brian Keating so you can get all of that and more. So again, 20 sessions total, available on demand. Now you can just go to Macon AI M A I C O N A I and just click on the 2025 on demand and AI show 50 will get you $50 off of that. So again, Macon AI and you can go experience Macon 202025 or relive it if you were there with us. Okay, we have a new thing we're going to do next week. I'll just tease it this week. I mentioned this a few weeks back or maybe a month or two ago, I threw out this idea. Mike and I are going to start doing some real time research. We're really excited about this. We were going to do it today, but we actually came up with a better user experience to introduce this right before we jumped on today. So we're going to hopefully start this next week where we're going to start doing some real time research with our audience to find out how they feel about different topics, thoughts on things, give people a chance to kind of have a voice ad question. So we're really excited about the idea of kind of getting more engagement going with our listener base. And so AI pulses are coming hopefully starting next week. So stay tuned for that. I guess that'll be. I think that'll be 178 because we, I think we have a second episode this week because we have an intro to AI tomorrow. Is that right? Is that tomorrow? Yeah, yeah, it's two. Yeah. Tuesday. So when you're listening to this, we have an intro to AI and then we'll do another AI answers this week. So watch for that next week. The AI Pulse surveys. We'd love to hear from you should be fun. We're going to kind of like experiment a little bit with how to do this and how to introduce it, but we're thinking it's going to be like at a topic level and we have really interesting topics to then gauge how do people feel about it, that sort of thing. So really excited about that. So stay tuned and otherwise, Mike, I'll turn it over to you to get us rolling this week.
0:00
Sounds good Paul. So we had a big release this week because OpenAI has officially launched ChatGPT Atlas, which is a new AI powered web browser designed to blend some automation features, memory and AI assistance directly into your everyday browsing. So Atlas essentially turns ChatGPT into a companion that lives alongside your web activity. It can summarize pages, compare products, analyze data directly from sites, all from a sidebar. Users can highlight text in emails, documents and calendars to rewrite or refine content instantly using ChatGPT. And the standout feature here is Agent Mode, a Preview tool for Plus Pro and business users that lets ChatGPT take actions on websites autonomously under user supervision. So for instance, in the live stream demo last week that OpenAI used to release, this, AI navigated retail sites and even purchased groceries on its own. Atlas also includes some memory features that lets users decide what ChatGPT remembers across sessions, and some privacy controls for clearing history or browsing incognito. Now, right now Atlas is only available via a Mac OS app, but OpenAI has said Windows users will be taken care of soon enough here. So Paul, I wanted to kind of maybe kick things off. I have a few thoughts from kind of my tests here so far, but what were your initial thoughts of this? Have you had a chance to experiment with Atlas at all?
4:56
I have not experimented personally yet with it. Interesting features for sure. I think my initial reaction was Google will likely introduce very similar capabilities here. So just for context, so I spend more time thinking about the bigger picture of the browser wars. Mike, I'd love to actually hear your feedback on your initial experimentation, but just so people kind of can frame this, Google Chrome has 70 ish, 71% of the market share. Now it varies by device, so mobile might vary slightly than desktop, that kind of thing. But Google Chrome is the dominant player here. Apple Safari is about 14%, Microsoft Edge is 5%, Firefox, 2% Samsung Internet, which I assume is Samsung devices 2%. Opera is around 2%. 1.7 Perplexity Comet, you know that's another AI browser doesn't register on these yet, but in essence what's happening is Google is so dominant that new entrants have to either other undermine that dominance in some way with something completely different. Which is in essence what they're trying to do with ChatGPT is like just reimagine the browser or they have to coexist by carving out niches. So that's the challenge everyone faces is Google is a major dominant player here. However, the day this came out, Google shares dropped almost 5% on the news, which I thought was weird because it's like we already knew they were working on a browser. There was already a form of the browser living within agent mode in ChatGPT. So I have indirectly used a variation of it, but not personally like the agent was going into the work. You covered some of the features. Mike There was Ben Goodger who's on the Atlas team. He tweeted out a little context that I'll share here. So he said he joined the team last year and since then they've built an internal a small internal team that worked on ChatGPT Atlas, what they defined as a new web browser designed for the AI era, an era that will be shaped by more human natural language, interaction, agents and ultimately AGI. Ben went on to say ChatGPT is woven into the fabric of the product, so it's always nearby and ready to go. He said that as he's used Atlas he noticed he's become more curious. He asks more questions about the web around him. He said I'm finding better deals online, interpreting my personal health data, understanding my kids homework and much more. It's all making me feel like a more informed, more self actualized human. He then went on to say with its built in browser Agent Atlas can browse the web for you, including to your logged in sites if you choose. We'll talk more about that in the next topic. Mike and it's super fast. This is one of those feel the AGI moments for me, which I thought was interesting. Ask IT to find all the ingredients for a recipe and load them into a shopping cart for you. Ready to check out. Ask it for tips on how to write a better doc or use advanced features of your spreadsheets or even watch it play a web game. Bloomberg I just noted this this comment that was in a Bloomberg article that we'll put in. Sam Altman said on the live stream this is an powered web browser built around chat. GPT said it represents a rare once in a decade opportunity to rethink the browser and then you know, just big picture. Mike what it means without talking about the safety side and the memory and you know, the whether you're using cottoned or not. What it's indicating to me is the shift in agent, agent communications and commerce. This is something we've talked about as sort of a recurring theme we've been touching on the last few months where as brand marketers, as business leaders from a customer success side, a sales side, we have to start realizing that a lot of the communication that individuals have with our brands in the future and, and the purchasing decisions they may make might not actually be them. It may be their agent that is doing these things and it's going to be hard to delineate in your site traffic when that starts to really happen. So this starts to play out in SEO ads, content strategy because you have to now start thinking about the AI interface for agents, not just humans. And so the business like how it gets found, how people interact with it, how they make these purchasing decisions. This is all stuff we have to start really thinking about now. One other thing Mike, I'll mention I referenced Google earlier and how I would expect them to make some pretty significant updates here. They're already integrating Gemini in, they're building agent mode into search. But keep in mind up until the beginning of September that Google was facing an antitrust case that potentially had them being forced to sell Chrome off. So there's a decent chance that Google has had all of this same stuff on their roadmap already. Right. But they certainly weren't going to launch all of that if they were going to be told by the Justice Department that they had to sell off Chrome in September. So once they made it through that case, now I feel like it's full go that they can start doing this and I would expect before the end of 2026 we will likely see some pretty significant enhancements to how Chrome works and thereby how we start to interact with these same kind of capabilities within there. I don't know, I personally like I'm not super excited about this idea. Like I'm not in a huge hurry to use Atlas. Like all those descriptions he just provided about like my kids homework and asking more questions like I get that from Chrome and I get it from just using Gemini and ChatGPT directly. So again not that this won't work and that won't won't be a major product for them. It's one of those like I'm going to kind of struggle to find my personal use cases that would be worth me switching to from a workflow that already works really well for me and is already pretty efficient and I love Chrome like it would be. I was actually telling you, Mike, I'm. I'm moving one of my email accounts within our Google workspace to it to a different email account and I have two versions of Chrome I'm logged into. And I'm realizing what a pain it is to change over because all my, all my bookmarks, I have my tabs grouped like everything I do exists within Chrome already. So the idea of having to like change that to a different thing, it's like oh my God. And then opening me up to like now you're giving another company access to all the things you browse and everything you do. So that's my thoughts on it.
6:26
Yeah, I, that's similarly where I landed. We'll talk about all the security and safety stuff, but honestly I was. And you know Simon Willison, who's an AI researcher as well that we follow and we've included some stuff here from him. He basically just said I am struggling deeply to find relevant, valuable agentic use cases in my own work at the moment. I'm sure that will change, but I'm not there yet. That's exactly how I feel. I'm like, okay, this is really cool. I have no doubt this is where we're headed. But I just think of the AI verification gap we've talked about to do anything useful with this agentic browser. A, it needs to work, but B I need to verify that it worked. And verifying that it worked is going to take me way more time and energy and potentially some security issues than me just doing the thing myself. Now maybe I'm not using the Internet the right way, but I don't do enough here where an agent could go do all this work for me that's very different for other people though perhaps.
12:44
Yeah. And they're going to push heavy on the agent mode. And in that same article you referenced with Simon, he said not only does he find it pretty unexciting to use, but he tried out Agent mode and it was like watching a first time computer user painstakingly learn to use a mouse for the first time. I have yet to find my own use case for when it's this kind of interaction. Feels useful to me though. I'm not ruling that out. Yeah, it's like because it is, it's kind of like it's finding its way and it's probably get really smart and really fast pretty quick. But you're allowing basically an agent to learn how to function on the web and it's probably going to be a little slow and a little painful. It's going to click the wrong things and maybe the really wrong things. That causes some major headaches. But yeah, so it's interesting. It is. It's a massive market opportunity and they want to own the user interface. But again, this starts getting to like productivity platforms and shopping and like they're trying to go well beyond information gathering and that opens up ad potential. So this is definitely a monetization play that's like part of the bigger vision for OpenAI and the role they want to play in society and in business. But, um, yeah, so it's, it's very early and again it's only, as you mentioned, like only available on macOS at the moment.
13:40
Yeah. And one final point here, and then we could talk about the security implications, is like you alluded to this with the agent to agent stuff. It just occurred to me that if we really extrapolate out a few years, like if this stuff works a hundred percent, you're just relying on your agentic browser for everything. It's like brands better be ready to lose total control over the funnel, over the buyer journey over. And that's been happening to some degree with, for 15 years with online social media trends. But it just really struck me, I was like, you just need to make sure your website, your web presence has everything that an agent might need to know at some point and it's going to remix it and re is it however it wants and you don't have any control over it.
14:53
Yeah, there's so many downstream effects of this. Like as you're saying, like the funnel and stuff, you start to think about lead generation. Like in a B2B world where you're so dependent upon lead generation and capturing contact information and nurturing those people. And yeah, I mean, what if it starts to shift where people just don't ever give you their email address? Like they're not going to, you know, visit your site themselves. They're just going to capture whatever information they need directly in their AI assistant and then the AI assistant will go and do whatever research needs to happen. And I don't know, I mean it's again, it can be daunting or this can be exciting because nobody knows. And so there's this opportunity for all of us to be the ones who kind of go figure this stuff out.
15:37
All right, our second topic this week, Paul, is related to ChatGPT Atlas. We're specifically focused on the fact that it is facing immediate scrutiny from Security researchers who say agentic browsing creates a dangerous new attack surface. So we talked about a little bit. Atlas is introducing things like browser memories and an experimental agent mode that can read pages, click buttons, carry out tasks. These features make it really interesting, but also exploitable. So there's a number of articles and commentary we're tracking where experts are warning of prompt injection attacks, where hidden instructions on web pages can actually hijack the agent that Atlas is using to exfiltrate emails, overwrite clipboards with malicious links, or even initiate downloads. So basically what this is doing is the agent is collapsing the boundary between data and instructions. So if it's reading a prompt in certain contexts that are hidden on a page, it may actually take that prompt and think it is instructions, which then turns the agent into an attack vector. Now, OpenAI's Chief Information Security officer, Dane Stuckey released a statement saying the company has performed extensive red teaming. He added, they have added overlapping guardrails. They're investing in rapid response systems. And he acknowledged that prompt injection remains what they call a frontier unsolved problem. So that's a start. But you know, there's a lot of commentary from the security community about just these huge privacy flags and issues with these exploits that are just straight up not yet addressed. So basically they're arguing this is not ready for security. Primetime and especially non technical users may not even realize that there are exploits possible that are unique to agentic browsers. So Paul, I was curious of your take here because it just like this is the elephant in the room on last topic, right? Is the security issues here giving your agent the ability to go do things for you. The fact that it can then be exploited seems like an absolute nightmare. Like I don't know how you actually use this in any enterprise today if you wanted to.
16:17
I don't either. So as the CEO of a company, again, that's my first thing is like do not turn this on. Do not use this through the company accounts, company computers, unless it's in a very controlled environment and we know what we're doing. You don't want everybody just going in and testing this. So we'll put a few links in here. Related to this, there is the help article directly from OpenAI where they talk about the specifics around data control. So in the section where it says include web browsing, it says this setting is available when improve the model for everyone is enabled. This is separate from your ChatGPT settings, by the way, so you would have to actually control the Atlas settings separately. There was A really interesting thing here. I haven't seen OpenAI address this yet, but I saw this brought up by a couple people. It says the improve the model for everyone in Atlas controls whether the content you browse in ChatGPT Atlas can be used to train our models. What does that mean? Like, so if, if I, if I am using Atlas and I go to someone's website with copyrighted material on it, I get to decide if they can train on that. Like how does that work? So I don't know if it's just a they misspoke. It doesn't seem that way. It seems quite intentional. But I don't know what train our models on someone else's content means and how a user could be the one that decides that that's what happens. So that's an interesting one. We'll wait for some clarification on and then in the browser memories this is important area for people to understand. So they say browser memories let ChatGPT remember useful details from your web browsing to provide better responses and suggestions. No big deal. Kind of like cookies. Like, you know that when you're browsing, like it remembers things, but you can go in and control the setting. But then a little more context says as you browse in Atlas, web content is summarized. On our servers, we apply safety and sensitive data filters that are designed keep keep that word in mind. Designed to keep out personally identifiable information does not mean they succeed at it all the time, but they are designed like government IDs, Social Security numbers, bank account numbers, online credentials, account recovery content and addresses, and private data like medical records and financial information. We block summaries altogether on certain sensitive websites like adult sites. So just to make this super clear to everybody, they monitor everything you do. It remembers everything you do, including all of your personal information and activity, and it summarizes all of that unless their data filters work correctly and they extract it all. So let's assume that those work. You have to know when you're going to use this, that that's how this technology works though is it captures everything you do so it can use it. So you are now trusting OpenAI that their filters work and they're not able to be manipulated and that that stuff doesn't end up somewhere you don't want it to. So just again clarifying. So Simon Wilson Mike, who you mentioned in that same article that we talked about in the first topic, he said the security and privacy risks involved here feel insurmountably high to me. I certainly won't be trusting any of these products until a bunch of security researchers have given them a very thorough beating. One other thing he mentioned was there was another detailing announcement post that caught his eye. He said website owners can add aria tags a RIA tags to improve ChatGPT agent works. So this is a note to like again the technical side and the marketers. So ARIA tags use the same labels and roles that support screen readers to interpret page structure and interactive elements. So just make sure you're talking with your team about that. So when we talk about getting your site ready for this kind of agentic browser, that's the kind of thing One other I'll mention that we'll put a link in is this prompt injections. Mike, you brought this one up Just to give a little clarity how this works. So there's a company called Brave. Again, we'll drop this link in. They had an article about unseeable prompt injections and here's what theirs said. So building on previous disclosure of the perplexity comment vulnerability, we've continued our security research across the agentic browser landscape. What we found confirms our initial concerns. Indirect prompt injections is not an isolated issue, but a systematic challenge facing the entire category of AI powered browsers. As we've written before, AI powered browsers that can take actions on your behalf are powerful, yet extremely risky. If you've signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in attck being able to steal money or your private data. And then it actually goes into like a very understandable review of like how this basically works, how the trigger works. But long story short, if people know what they're doing and they want to get at data, they can do this kind of thing quite easily. And then I always laugh when we cite Pliny the Liberator. But like, there's this amazing Twitter account I would suggest following. We'll put the link in. But Pliny the Liberator, it's an actual person, that's a pseudonym obviously, but what that person put in he said in my opinion, a very real security risk to be aware of for AI browsers is the humble yet mighty vulnerability of Clipboard injection, which is like you copy and paste something unbeknownst. So not only is there the prompt injection where maybe you click on something and automatically injects it, but if you do a copy paste on a page where someone is like hidden some text data, that's actually like an instruction to your system of what to do. Long story Short, as you mentioned, Dane Stuckey, the chief Information Security Officer OpenAI had a very long winded tweet about this and Dana doesn't tweet very often. So like you could tell this became an issue real fast. And so he had probably like a 500 word tweet about what they're doing. And it's because this is very obviously not safe for work stuff. So yes, the idea of this is cool. It is very early. You may individually struggle to find use cases where this is any better than Chrome. Probably it isn't, I would say at this point. But you can see where OpenAI is trying to go with this and how they're trying to shift behavior and really get you to treat OpenAI's ChatGPT as a platform for your life and your work. That's what they're trying to get to. This is like a step in that process, not the end game.
18:22
You also wonder, for myself personally and just in general, where is the tipping point? Like I look at this and say, okay, if OpenAI came out with a system card tomorrow that says, hey, by the way, we solved everything, it works perfectly. Sure, I'll go test it.
24:57
Right.
25:11
I still don't know until I've actually verified it. So when am I going to hit that point? Personally, I don't know. Curious about wider consumer behavior too. They seem to just be releasing this and it is deeply unsafe at moment. Yeah. How is that going to change behavior? Are people just going to get numb to it? I don't, I don't know the answer to that.
25:12
Yeah. So, and maybe Mike, this is a good example of what we're going to do with our AI Pulse surveys. We're talking about like this is the exact thing like, okay, let's ask our audience, like are you, do you feel safe trying an agent browser kind of thing? Like this is. Exactly. And maybe we'll add that as a question next week, as like a follow up to this week's, but we don't know. Yeah. And I think that's why it's so fascinating to get that real time research from people and find out where people at with this. And if you're a business leader, would you ever allow the testing of this in your company outside of like it in a, in a protected sandbox kind of thing.
25:30
Yeah.
26:02
So yeah, I guess, long story short here is experiment at your own risk and just be real cautious with how you use it. And, and that it's very early and so if you don't get it, it's there's probably a reason why it's. It's not really ready for prime time stuff yet.
26:02
All right, switching gears here a bit with our third big topic this week. There's a new open letter out that is urging a halt to the race towards superintelligence, which is the kind of AI that could surpass humans at virtually all useful tasks. So this is a letter coordinated by the Future of Life Institute. And this statement is notable because it has more than 700 signatories, which include five Nobel laureates, AI godfathers Yoshua Bengio and Jeff Hinton. Apple co founder Steve Wozniak, Richard Branson, Stuart Russell, big AI guy Steve Bannon. There's some weird political and cultural and religious figures on here as well, Prince Harry and Meghan Markle. And basically the message is super blunt, super short. It's a very simple letter that says we call for a prohibition on the development of super intelligence. Not lifted before there is broad scientific consensus that it will be done safely and controllably and strong public buy in. So the organizers of this say that basically time is running out and this tech could arrive within a couple years, which is why they're doing this now. Interesting. They released some polling they did alongside the letter that finds apparently 64% of Americans favor waiting until superintelligence is provably safe and controllable, and just 5% want rapid, unregulated development. So we'll talk about that in a sec, but that seems interesting to me. But I guess my question for you, Paul, is like, why this? Why now? Like, they did a previous letter, they were behind that, that six month pause letter we covered a while ago that obviously didn't really do anything. Is this just like for awareness? Do they actually hope a ban could happen?
26:19
I do think it's primarily for awareness and to get societal support, maybe for more push towards regulation. So the Future of Life Institute, if people aren't familiar, the mission is to steer transformative technologies away from extreme large scale risks and towards benefiting life. Max Tag Mark, who you mentioned, is the President. He's also the author of Life 3.0 Being Human in the Age of AI, which I think you and I have both read. Great, Mike. And then our mathematical universe, which I actually need to add to my list, I've been like, that's been a separate, like a thing I've been very fascinated by lately is like the fundamental nature of mathematics and time and stuff. Totally unrelated. So they're big on AI safety research fits right into their mission. So I looked this morning, Mike, and I think it was up to 47,000 signatures if I read it correctly. So yeah, they're. A lot of people are signing this when they released it. Max Tag, Mark tweeted. A stunningly broad coalition has come out against Skynet. I thought that was really interesting wording. AI researchers, faith leaders, business pioneers, policymakers, national security folks and actors stand together. From Bannon and Beck to Hinton, Wozniak and Prince Harry. We stand together because we want a human future. Hashtag, keep the future human. The statement you read, Mike, as it was just two quick points. The context because again, the. The web page itself, which ironically, like my. In my home browser, was blocking me from going to it because it said the site wasn't secure. And I was like, oh no, it's not working. But then I went to my like, you know, my, my cell plan and it was, it took me to it. So if you do get blocked, that's why when you go there, there's nothing there. So anyway, the statement said context Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public people who feel the same way, basically. So as you mentioned, they define superintelligence as a system that can surpass human performance on all useful tasks. There was a separate thing, Mike, that I think makes sense to get into is like this definition of AGI paper. But first I want to talk about the counterpoint to the statement. So the one I read and that I saw a whole bunch of people referring to, including Max Tag. Mark is from Dean Ball, so he's a senior fellow at. What is this? Join foundation for American Innovation. But he's obviously someone other people listen to because a lot of people were kind of commenting in this. So here is Dean's point of view and then I will offer my, my contextual opinion here, I guess. So he, in reply to. To Max Tegmark's post, said vague statements like this, which fundamentally cannot be operationalized in policy but feel nice to sign, are counterproductive and silly, just as they were two or so years ago when we went through another cycle of nebulous AI statement signing. Let's set aside the total lack of definition of superintelligence. Give them some credit. They did put a definition. I'll even grant the statement drafters that we all arrive on a mutually agreeable definition, then assume we write that definition into a law which says, no superintelligence until proven safe. So he's basically saying, if we agree on this definition of superintellence, let's assume we do. Then we move forward and say, okay, we can't have it until it's proven safe. So Dean then continues, how do we enforce this law? How do you prove superintelligence will be safe without building it? How do you prove a plane is flight worthy without flying it? You can't. So the logic would go, we will need a sanctioned venue and institution for superintelligence development where we will experiment with the technology until it's, quote, unquote, proven safe. Then says, who decides this, by the way? And what happens after it is proven safe? Question mark? This institution would need to be funded somehow by all governments with similar prohibitions, which the statement drafters, though probably not all signatories would likely argue needs to include every country on Earth, including US adversaries. A global governance body whose purpose is to build the thing. The statement drafters have told us is so dangerous, partially because of the power it could confer on those who control it. A consortium of governments which, if, if successful, would exercise unilateral control over how to wield this technology and against whom to wield it. The same people who uniquely possess militaries, police, and a monopoly on legitimate violence. The same people who possess, in other words, and in the final analysis, the right to kill you or confiscate your property if you do not listen to them. Newly empowered with the most powerful technology ever conceived. Does that sound safe to you? This sounds to me like the worst possible way to build super intelligence. I reject all efforts to centralize power in this way, and I reject blobby statements with no path to productive realization in policy. So, yeah, I'll. We'll come back to the definition of AGI in a second, Mike. I'll just kind of stop there. So my feeling here is, do. Do we need regulation? Yes, absolutely. Do we need more collaboration and less acceleration? I would be of the opinion of, yes, we do. Like, this is. It doesn't feel like the way we're doing this right now is the safest way. Yeah, but there's nothing that Dean tweeted that I disagree with. Like how, like, all we've ever heard from Demis and Sam and others is, like, need, like a, like the, the, the council that, like, controls nuclear weapons. Like, we need something to that effect. Okay, who's putting that together? Like, where are we right now? Like, I don't feel like the superpowers of the world are currently in a place where we're going to be able to negotiate that. Like, there's some other stuff that, that we're trying to work out together that isn't going so smoothly. So the idea of, like, well, let's now get to the table and negotiate the most powerful thing ever created in human history that could imbue unspeakable capabilities and power onto those who hold it and create it first. But yeah, let's all get together and, like, figure that out and balance that out. Like, I don't know the answer. Like, I am nowhere near smart enough to, like, be the one who solves how you do this. All I know is it doesn't feel like right now is the right path. I don't know that signing a statement does anything other than create awareness about the thing, which may be, again, all it's meant to do at this point is just like, get society aware and talking about this.
27:57
Yeah.
35:18
So that maybe they can then get down further down the road of regulation. I don't know. Do you have any thoughts on that before we talk about the AGI stuff?
35:19
No. Yeah, I couldn't agree more. It just, I, I, maybe this benefits whatever goals they have, and that's certainly their right to go do that. It strikes me as, like, especially with the, the intellectual firepower and some of the names on this letter, these are people that I think can have a real impact on, like, specific policy if they came out and said, hey, you know, deep fakes are the biggest issue facing us right now. Here's what we should do to legislate that or something like that. I tend to think that would be much more helpful and impactful. But I'm also biased towards kind of the middle of the road, realist perspective here.
35:27
It's so messy because you're right. Like if, if you took the creatives, like, I know there's like, actors and stuff on there. Yeah, go, go focus on intellectual property and copy. Like right now, does that lead us down a path of, like, what we're doing right now with laws where it's like, all right, federal is not going to do it. Let's just do it as state now you get a thousand different bills maybe to like, go after this Thing. Yeah, they have religious leaders and like, how do you solve that? Like, I mean we're talking about the, the legitimate questioning of like the basis of what billions of people believe. If like you can create intelligence that's determined by someone to be conscious and sentient, which is like impossible. And we really just believe like, I don't know, I mean this is just such a big thing. We can't agree on whether it's going to take jobs away. There's still these people who like are on this side of. It's not going to impact the economy at every data point we see tells us that's not true. And it's like don't look over here kind of stuff. So I don't know, I do struggle. I feel like the, the, you know, getting back to the root of the definition thing, like they did their best to try and put a definition to it, but then simultaneously, and I don't know that these were intended to be in unison, but I'm assuming since Max Tag Mark's on this definition of AGI paper, yeah, it was intended to sort of coincide with this statement. So what we're referring to here, and we'll put the link in the show notes as well, is literally a paper came out last week called a definition of AGI. It has 33 authors, including Dan Hendricks of the center for AI Safety, Max Tagmark, Future Life Institute, Eric Schmidt, former CEO and chairman of Google, Yoshua Bengio, one of kind of the godfathers of AI, along with Geoff Hinton. And so their paper, it starts off as the lack of a concrete definition of artificial general intelligence obscures the gap between today's specialized AI and human level cognition. This paper introduces quantifiable framework to address this, defining AGI. So again, we're not talking superintelligence now. This is the AGI. What comes before superintelligence as matching the cognitive versatility and proficiency of a well educated adult. So that's a new one. It's, it's like complementary to the definitions we often use. The trick here becomes what is a well educated adult? Like that. I don't know if they, I didn't see them define that as like someone who graduated from college. I don't know what that is. But the cognitive versatility and proficiency, they actually then did apply a framework which I found rather intriguing. So outside of the AGI paper from Google DeepMind a few years ago that we often cite where they talk about generality and performance and leveling the levels of AGI there, this is probably the most advanced I've seen that applies a real framework. You know, believe in the framework or not, at least they're making an effort here. So theirs looks at 10 core cognitive domains. So they look at knowledge, reading and writing, math, reasoning, working memory, memory stage, memory retrieval, visual, auditory and speed. And I'm not going to go into like breaking down each of those, but it's a really good quick read if you want to understand. They're in essence trying to look at like the human capabilities and the human mind and where intelligence comes from and where our ability to function and act in the physical world comes from. And they're trying to then take specifically GPT4 and GPT5 and say, where are they on this spectrum? And what they find is the jagged part comes from it's getting really good at math and reasoning and writing and things and knowledge. Pretty good. But it's not so good when it comes to like working memory and visual and auditory and speed. Like it doesn't, it struggles there. But they saw that the AGI scores based on their framework jumped from GPT4 at 27% to GPT5 at 57%. So 30 percentage points jump. So there's still a substantial gap before AGI, but now they're looking at it saying, we are heading very quickly in this direction. Now one key thing is they look at human level AI, not economically valuable AI, which they distinguish between, meaning it does what humans does, but not in an economical way. They're not trying to look at can it do jobs per se, they're just looking at human cognitive abilities and so they differentiate that as well. So, you know, I don't know. I think overall, Mike, like you asked, why now, like at the start, like, why is this all of a sudden the conversation, the super intelligence thing? I think because in part the risks are becoming very real. Like we've known that there was risks. If we got to this point where the AI sort of starts to just take off and be at these superhuman levels and maybe we don't even know what it's doing when it gets to that level. Maybe at some point it just gets beyond our own cognitive ability. We have Sam Altman saying point blank, they are basically a super intelligence lab. We have Meta and Zuckerberg literally calling it a super intelligence lab. We have benchmarks that are tracking towards progress against economically valuable work. The economy, stability and growth over the last like 12 to 18 months is in large part being driven by capital expenditures for AI on the infrastructure, for AI itself. If you extracted energy and data center plays from gdp. It's like, do we even have growth? Becomes a real question. And then international laws like the eu, AI act, state laws are starting. So all of this is now starting to happen where everyone's sort of simultaneously realizing like, oh my gosh, this is a huge deal and we have no idea how to handle any of it in education and business, in the economy. So yeah, it's, it's wild. But I like to see this stuff progressing. I don't know, zooming back into the statement itself, if it has any real meaning or plays any role in progressing. But I don't, I mean, I'm glad people are trying. Like we can't just sit back and just hope that the three to five AI labs just figure this all out on their own with no pressure from society.
36:00
I will tell you the funniest thing is if that definition of AGI benchmark where they're saying, hey, GPT 5 is at like 57%. If that kind of carries through. The funniest thing is that I'm going to have to dust off my Ray Kurzweil because his prediction was AGI in 2029. He's going to stick the landing on and he made that prediction 25 years ago.
41:54
Yeah, and Shane Leg was back in 2000 and you know what? Seven Eight, co founder of DeepMind. His I think was 2028. Like they've in Demis. So yeah, everybody who had these like extended timelines, they're looking pretty smart right now and the scaling laws are on their side that we do get some form of AGI. And again, I've said it before, I'm not so convinced we don't already have it. It just needs to be finely tuned for specific jobs. Like I think the foundational models we have when trained to do specific things could, you could argue that they are the foundation of AGI already. And if we shut off all future growth, it would just take someone going in like OpenAI is doing, go under 100, go hire 100 bankers and like teach it to be superhuman at banking. Like that's really the seems to be the only barrier now we might get GPT6, GPT7, like an out of the box at that level in all professions. But it's going to get interesting, that's for sure.
42:13
Let's dive into some rapid fire for this week. So first up, Anthropic seems to be playing some defense after we talked about it being publicly targeted by White House aizar David Sacks last week. So we had talked about how Sachs accused the company of driving a sophisticated regulatory capture strategy built on fear in response to Anthropic co founder Jack Clark's public statement from an event he did where he was warning that we need to think about and regulate advanced AI more carefully. So interestingly enough, there is kind of maybe coordinated, maybe not. Not sure this defense coming from two sides. So first LinkedIn co founder Reid Hoffman posted a public thread defending Anthropic. He urged the tech industry to back the good guys in AI, and he puts Anthropic at the top of that list. He is obviously the co founder of LinkedIn, but also an early OpenAI investor. And he praised Anthropic for pursuing AI the right way, thoughtfully, safely and enormously beneficial for society. He did say some labs were disregarding safety and societal impact and arguing that Anthropic is kind of at the forefront of this responsible innovation now. Those comments came just as Anthropic CEO Dario Amadei issued a detailed statement on the company's AI policy stance. He reaffirmed Anthropic's commitment to AI as a force for human progress, not peril, while emphasizing alignment with the Trump administration's AI action plan and bipartisan cooperation on national AI standards. So Paul, especially that statement from Daria just sounded so defective, like he was like, oh no, feel like we're in trouble here. It's very full throatedly in support of what's going on right now with the current administration.
43:16
Yeah. So if you didn't listen to episode 174, we talked about Jack Clark, co founder of Anthropic and the essay he had written and sort of put it in the context of what's going on and how they're probably not making friends at the Trump administration right now. So this letter, if you read it from Dario, feels like it is written very specifically to their investors and to the Trump administration. Yes. Like it is very clearly. It seems very obvious that they've probably heard from their investors who are getting a little bit skittish, that they're, they're causing so much friction at the moment and not sort of following suit with a lot of the other labs and then the, the administration, because he explicitly calls out Vice President J.D. vance multiple times and actually like the first sentence, I strongly agree with VP KD Vance's recent comments on AI, particularly his point that we need to maximize applications that help people like breakthroughs in medicine and disease prevention, while minimizing the harmful ones. This position is both wise and what the public over overwhelmingly wants. So he's like trying to frame this as, hey, this is your idea. We're the ones that are like actually supporting this idea. I would definitely recommend people who are interested in this thread of AI go read this thing because it gets into a couple of other areas. I'll just call it a few highlights here. So he mentions that there are products we will not build and risks we will not take even if they would make us money. So I would think this is safe to say they aren't planning on getting into the erotica game like OpenAI XAI, Meta, Character, AI and others. They're not going to be building the companion bots. I think that's probably a very safe bet. And then he goes into where he says, despite our track record of communicating frequently and transparently about our position, there has been a recent uptick in inaccurate claims about Anthropic's policy stances. So he then breaks it into alignment with the Trump administration on key areas of AI policy, including calling out the fact that they have a $200 million contract from the government on prototype frontier AI capabilities for national security, that they publicly praised the President's AI action plan. They just didn't agree with him on one element of the big beautiful bill, which was state level AI law moratoriums for 10 years. But they then called, said this was bipartisan, it was a 99 to 1 vote in Senate that people didn't want that. So, like, we're not doing anything other people aren't doing. He then went into preference for a national AI standard and progress on AI industry wide challenge of model bias because some people have said that they have a very liberal leaning model. And he's like, everybody has bias in their models, but ours is no more biased than others. You're just cherry picking examples, basically. And then toward the end, he said in his recent remarks, the Vice President also said, quote, is it good or bad regarding AI, or is it going to help us or hurt us? The answer is probably both. And we should be trying to maximize as much of the good and minimize as much of the bad, unquote. That perfectly captures our view. We're ready to work in good faith with anyone of any political stripe to make that vision a reality. So this is like, it's almost, I don't know, it's like a lifeline to the politicians. Like, please, like, we're trying our best here. We see you, here's what you're saying, we're agreeing with you, but I don't know if it's Going to work or not. But it seems like a bit of desperation, honestly here. This is a very out of character post for Dario to make. He's published more recently in the last 12 months. He was very, very intentionally off the grid up until like 12 to 18 months ago. This is not a normal letter from him. So this is like the. Something has been unsettled either at the investor stage or at the politician's days. My guess is both. And they're trying to make peace while still sticking to their beliefs and values. And I don't know where, I don't know where this goes. I could see this going bad for them, but I don't know, maybe they find a way to sort of like thread the needle on this one.
45:05
Yeah, no kidding. They're in an tough spot. I mean, not only the administration, but we talked about so much of their staff is bought into this mission of responsible AI. It's the reason Anthropic exists. Like, if they start compromising on that, they could also see an exodus of talent too.
48:59
Yeah, and again, I've said this, and I'm not like, I'm not trying to make like predictions here, but like at some point, if they don't see a way out, like if they're not going to compromise and the administration decides to penalize them for not compromising, then there might be the best path out might be an early exit and an acquisition at a discount. Because if, like Apple or Google or somebody wanted to step in and buy Anthropic, they have an astronomical valuation. Assuming that they continue to be able to grow uninhibited. If the government decides that Anthropic is not a friend and that growth all of a sudden becomes less than what it's been, then all of a sudden someone might swoop in and say, like, let's go. So I have no idea. I still kind of stick with my thought that I could see Anthropic eventually having to fold into a bigger company to continue competing for a variety of reasons. But we'll see what happens.
49:16
Next up, we have some news about Amazon. According to internal strategy docs obtained by the new Amazon plans to replace more than half a million human roles with automation over the next decade, with the goal of aiming to automate 75% of its operations. Now, the company projects it can avoid hiring roughly 160,000 new workers by 2027 and more than 600,000 by 2033, even as sales are expected to double. Now, this is largely automation within their factories and facilities. And you're starting to kind of see how this memo and the robotics team is playing a role here and how all of this is going to play out in facilities like their new Shreveport, Louisiana warehouse, where over a thousand robots handle most packaging tasks. So apparently, according to the Times employment there is already 25% lower than it would have been without automation. That's expected to reach 50% lower as more robots come online, according to these memos as well. To soften the optics around all this, as they aim to replace all these factory workers or warehouse workers rather with robots, they're encouraging using terms like quote, advanced technology or the term cobot instead of robot. So more like collaborative robot. Or even getting rid of using AI entirely to kind of massage how they talk about this and how it's perceived. They've even drafted community outreach plans to sponsor local events and avoiding automation talk entirely to maintain their image of being a good corporate citizen. So that Amazon claims that is not true. They are rejecting the Times assessment of this and of these internal memos. But I'm curious, Paul, like, assuming this is not all completely made up, which I don't think it is, does this tell us anything about what we can expect regarding how these companies are going to treat AI automation moving forward?
50:14
It's everything we assumed. I mean this is Amazon's history. This is like what they do. They obviously look for automation. They've been major investors in robotics for the last 15 years or more. The trick here is like we talked about Walmart a week or two ago, you know, being the largest private employer in the United States, Amazon's the second largest employer in the country. So they've US workforce has more than tripled since 2018 to 1.2 million people, which includes a lot of delivery drivers and people in warehouses. So yeah, you mentioned this, but like this isn't coming from like some just random leak. This is what executives told their board, like their goals are. And then there was in the Times article said a facility designed for super fast deliveries. Amazon is trying to create warehouses that employ few humans, if at all. Documents show that Amazon Robotics team has an ultimate goal to automate 75% of its operations. Amazon did give a statement that said the documents viewed they were legitimate, but the documents viewed were incomplete and did not represent the company's overall hiring strategy. There was a quote in here from Darren Ace Maglu, who's a professor at MIT and won the Nobel Prize in Economics Economic Science last year. So pretty legit person said nobody else has the same incentive as Amazon to find the way to automate. Once they work out how to do this profitably, it will spread to others too. If the plans pan out, one of the biggest employers in the United States will become a net job destroyer, not a net job creator. The big catch here is like the future of work. The question starts to become, do you need an engine engineering degree to work at Amazon? Because what they're saying is in the articles that Amazon has said it has a million robots at work around the globe and it believes the humans who care of them will be the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon's facilities operate more like advanced factories. And we, you know, we talked about it, or maybe this was like one of my cousins is an engineer and I think we were like randomly talking about this at a Halloween party. But like even their drivers, like, imagine autonomous driving. We'll talk about Tesla's autonomous driving at the end of this episode. But imagine that the Amazon fleet is largely autonomous to the point where, you know, seven, 10 years now, it doesn't even need human drivers in the cars. And then imagine that they solve humanoid robots in that time, which are both certainly on, on the path of possibilities. And now, like, how many of these Amazon employees are drivers or contractors or drivers? And so if you don't even need that fleet, and that's largely solved by humanoid robots and autonomous vehicles, talking about some major disruption in the next 10 years. Again, stuff that most people don't even acknowledge as a possibility. When you look at stuff like this, it's like not always a possibility. It's, it's a probability like that there's some meaningful disruption to an entire workforce. And if Amazon does it, everybody else will do it in the supply chain. Like everybody's gonna look at that from a manufacturing operations standpoint. Logistics, delivery, transportation.
52:07
Like, yeah, so it's super interesting point too about the more advanced kind of degrees or expertise required there. I mean, it's like, you could probably argue like some of these jobs shouldn't be done by humans and are back breaking or really hard to do. But it's like, okay, great, we'll maybe create however many new jobs that require robotics and engineering expertise. But that's not going to apply to all these people that are getting seasonal work through Amazon. Right. So they are displaced, not changing jobs.
55:15
Right? Yeah, yeah. I don't know. Again, like, this is a lot of what we do on this podcast is just surface what's happening in hopes that other people start to think about this, like, we are not presenting as, like we have, you know, deep insights into what the economy looks like in five years that no one else has. And like, we've got this all figured out. We don't. We're just trying to share the information. We're seeing in his objective a way as possible and like, draw your own conclusions. Like they're telling you point blank what their plan is. I just want people think about what do we do if it's true.
55:44
All right, next up, according to another internal memo obtained by the New York Times, Meta is laying off roughly 600 employees from its Super Intelligence lab, which is the umbrella division overseeing AI research and product development. So the move affects teams across fair their existing or previous AI lab product and infrastructure groups. But it spares the core unit led by Alexander Wang, which is Meta's recently appointed chief AI officer. Now Wang told staff that the goal is to reduce layers and speed up decision making so that each person can have more impact. The cuts obviously follow the years of rapid hiring. As CEO Mark Zuckerberg has port billions into AI. And while Meta continues recruiting top researchers from OpenAI, Google and Microsoft, this restructuring kind of shows they're starting to consolidate around this idea of super intelligence, which is AI that could surpass human cognition. And Zuckerberg actually, as part of this, with this leak, reaffirmed that building such systems remains one of Meta's highest priorities. So Paul, I'm curious to get your take, like what's going on at Meta. Obviously they're still pursuing superintelligence. This seems like it might have disproportionately, disproportionately affected the kind of legacy fair people like, what's the reason for the move? Is, is this a bad indicator of their ability to keep up in the AI race?
56:16
I don't know. I mean, I think they've made it pretty clear they like the idea of like really small teams that probably can keep information tight like that. They're, they probably don't want thousands of people with access to the most advanced stuff. So if they feel like they start making breakthroughs in superintelligence, if they uncover new dimensions to pursue in AI research, they want to like run this thing more like a Manhattan Project where there's just very few people who are in the know about things. And so by spending all this money on the high price talent, it's like, okay, let's go get the 50, 100, 300 smartest people in the world that will come over for $300 million or a billion dollars, whatever we have to pay them. And let's like, try and consolidate that. Now, that strategy never works because these people bounce between labs so frequently and they all hang out at the same parties and they'll like, share what they're doing. But I don't know, it just seems like this probably has more to do with consolidation of the best minds into smaller groups than it does. Like, AI is replacing the need for 600 people. Like, I don't think this is a AI automated the job of AI researchers. So we don't need these 600 people, I would imagine. OpenAI, Google, DeepMind, others are ecstatic. It's like, great. Like, we'll go, you know, pick up some talent that's been at a leading AI lab. So, yeah, I don't know that there's too much more to this other than they're trying to figure out what this structure looks like and they're trying to figure out how to best set up these teams to pursue these super intelligence goals. And yeah, fair probably isn't gonna. I just have to make that, like, they're probably not gonna fare so well. Like, sorry, it's like the most obvious dad joke to make here. Yeah, I don't know that it's gonna bode well for the people who've been there that we're doing things the other way.
57:38
All right, next up, OpenAI is facing a wrongful death lawsuit that accuses the company of weakening ChatGPT suicide prevention safeguards to boost user engagement before the death of a 16 year old named Adam Rain. So according to some court filings reviewed by the Financial Times, the rain family alleges OpenAI truncated safety testing and instructed its model not to disengage when users discussed self harm. So the changes reportedly coincided with the rollout of GPT4O in May 2024, as competitive pressures mounted. By February 2025, new internal guidelines replaced outright prohibitions with softer instructions to take care in risky situations, according to this lawsuit. And after these changes, Adam's daily chat volume surged from a few dozen to nearly 300. And there was a huge spike in the amount of his chats that unfortunately involved self harm content, 17% of them having it in the month of his death. So the Rain family's lawyers are arguing the company's actions were deliberate and intentional, basically marking a shift from negligence to actual willfulness. So, Paul, it's just a super tragic case, but interesting to see that people are trying to hold companies like OpenAI accountable for what's happening on their platform. When it comes to teens having conversations.
59:23
Around mental health, yeah, this stuff's really sad, hard to talk about. It's also spilling into another area that we haven't got into, which is OpenAI's approach to legal issues. They were getting a lot of bad publicity, at least on X. Again, I know I live in this, like, information bubble, and maybe this stuff hasn't carried over into the mainstream yet, but they have taken a very, very aggressive stance on all their lawsuits. And they hired a very aggressive law firm, maybe a collection of them, and they're going after people in pretty insensitive ways. I won't get into, like, all the details and stuff, but, like, subpoenaing families of, like, people whose child killed himself and, like, all the records, because they're trying to figure out, apparently if, like, Elon Musk is, like, funding things. It's just like, it's so crazy. And so they were getting a lot of flack for their approach. And I think some of the leaders at OpenAI were like, oh, we weren't aware of what our lawyers were doing. And. But you don't hire these lawyers unless you expect them to be very aggressive is basically what it comes down to. So this is just. It's tough on a lot of levels and it's a very messy part of what's going on. And yeah, it's like one of those things. Like, I don't even like having to talk about this stuff on the show, but I feel like we have to just to raise awareness about what's going on. So, yeah, again, just part of it to keep. Keep in mind and if it's interesting to you and you want to, like, go further on this stuff, like, you know, there's. There's a lot of emerg research and news articles and things about this part of it. So, yeah, we'll do our best to kind of keep spotlighted a little bit without getting too much into it.
1:00:43
Right. Yeah. And our next topic is kind of more around AI relationships, but without, hopefully, as much of a kind of tragic element here. There's some interesting stuff going on here. So two other stories jumped out this week. First, our home state of Ohio is actually introducing a new bill from an Ohio state representative that intends to declare AI systems, quote, non sentient entities, blocking them from legal personhood, and prohibiting marriages between humans and AI. This proposal in this bill goes further, barring AI from owning property, controlling financial accounts, or serving as company officers. Now, at the same time, we also saw this national survey from the center for Democracy and technology that has some pretty wild stats in it. And in it, they surveyed. Surveyed a couple thousand high schoolers, parents and teachers and found that nearly 1 in 5 u. S. High schoolers say they have they or a friend have had a romantic relationship with AI. 43% of teens surveyed said they use AI for advice on relationships with other humans. And 42% said they use AI for mental health support or turn to AI as a friend. And over a third said it's easier to talk to AI than to their parents. So the reason we're kind of looking at both of these, Paul, is like, kind of some more signals that this AI, AI and relationships is just becoming what seems to be an enormous topic.
1:02:24
This stuff's wild again. We live in this stuff every day. And sometimes I read these things and I just like, shake my head in disbelief that we're here so that we need to have bills. About people marrying AI Is just crazy. But I was trying to look and see, like. So the guy leading this charge, Thaddeus Clagett from Licking county, never heard of him, but he apparently chairs Ohio's house technology and innovation committee. So it's not like. It's like just some random representative who's trying to make a name. He's apparently influential enough to be on the technology innovation committee. So the fact that there's a need to even have this conversation is kind of crazy to me. Yeah, the one in five US High schoolers say they are a friend that I can't even process. Like, that's. I mean, my daughter's in eighth grade. There's what, 58 kids in her class? Yeah, I trying to, like, that's just. That's nuts.
1:03:53
Yeah, that one's interesting to me too, because, like, I. I could get it. You could probably quibble with the idea. Like, it's also people that say they know someone, so who knows, like, how that number. But to me, if this was 5%, whoa, that's like a huge amount.
1:04:56
I don't care how the survey works. Like, are you saying yes on that?
1:05:12
Right, Right.
1:05:15
Who. Who? Like, you're good. You'll admit it if it's your friend. But, like, that's. You're not getting, like, real data on that one. So who knows? And then a third say it's easier to talk to Ed and their parents. That is sad, but probably true. Like, that one I could actually see. Yeah. 42% for mental health support. This is why I brought this up on a number of episodes recently in the last couple months in particular. Like if you're a parent you gotta understand this stuff and you have to talk to your kids about it because whether they form a relationship or not, I don't know. Or if their friends do, I don't know. But would they turn to it for mental health support? Right? Totally. Like I, it might be the first place they think to turn to honestly, like this generation. So I, I talk to my kids about this stuff all the time. Like again, my whole thing is I want them to understand, I want them to be prepared and I want them to be prepared to help their friends because their parents might not talk to them about it. And so like I feel like that's the people who listen to the show. Hopefully we're all kind of in that bubble together. You may be the only one in your friend group, in your family, in your community that actually knows any of this is going on. And I feel like we all kind of have an obligation to do our best to prepare people in our circles to make sure we're doing as much as possible to have AI positively impact society. If we don't have these conversations then this could go sideways real fast and I don't want to see that happen. These numbers are scary to me, honestly.
1:05:16
I'd actually recommend everyone go skim the reports like 65 pages. I read through most of it. But even if you use Notebook LM or something like that, most of it's just charts and some of them are really eye opening. I mean just a couple more data points to reinforce this. 70% of students in the survey said their parents have no idea how they're interacting with AI. 66% of parents said they don't know how, that they asked the same question of both. That's wild to me. And then 42% of parents and 39% of teachers said they were worried about students developing an emotional connection with AI that's at least good. But that number should be 100% in my opinion. Yeah, you know, it's a gap to your point.
1:06:46
Throw it in notebook or chat, whatever you got and say hey, I'm a parent of a, you know, 13 year old, a 12 year old, whatever it is. What do I need to know from this report? Like highlight for me some of these key things and like what do I do about it? Like yeah, it's, this isn't going away, this is going to get, you know, you can look back at the impact of social media and how it really changed people behavior and things like that. And it's probably the best parallel we have right now to like how AI is going to start to affect people. So yeah, just one of those things. You got to kind of have the eyes wide open and don't want to deal with this reality. But this is it. This is what we got. We got to figure out how to handle it.
1:07:26
All right, next up, OpenAI is training AI to do the grunt work of Wall Street's youngest bankers. And it's paying veterans to teach it how. So according to some documents reviewed by Bloomberg, more than 100 former bankers from firms like JP Morgan, Morgan Stanley, Goldman Sachs are contracted on a project codenamed Mercury. They're being paid 150 bucks an hour to write prompts and build Excel models for IPOs restructurings and buyouts. And then they get early access to the AI they're helping create and train. Basically this workflow mirrors the analyst experience. They're doing one model a week, getting feedback from a reviewer, fixing issues, and then shipping updates to an AI model. OpenAI says it regularly works with outside experts to evaluate and improve its models. In response to this news kind of breaking and I guess my question, Paul, is like, this is a pretty interesting domain specific effort on OpenAI's part. Are they trying to automate away bankers? Like why banking?
1:08:03
Yeah, I mean I think this just fits with what we've talked about. Is it Merkor? Right, the one that we talked about that does the training? I assume that's who's doing this.
1:09:06
I made a note of that. I was curious if that was them.
1:09:14
Yeah, either they brought it in house or this is Mercury. So we talked about them. I will put the link in show notes. It was like four or five episodes ago. And this is their business models. They work with all the AI labs and then they go hire experts in different industries to fine tune models to be expert level at whatever industry you want to take on. So we're hearing about bankers now. I can almost guarantee you they're doing this with lawyers and accountants and consultants and like take your pick because this is how it works. You pre train the model so the model kind of comes out ready to go and then you go in and you fine tune it. You know, the reinforcement, learning to do specific jobs. And that's how you automate work. Now you can position it as cobots or co pilots or like whatever you want to call it. But at the end of the day, we talked about this on a recent episode, there's $11 trillion in U.S. wages, probably 5 to 6 trillion of that is for knowledge workers the greatest way to build wealth and, and fund all these things you got to do is you go automate knowledge work. You take a trillion, a half a trillion out of the market with take your pick, you go after that market. So yeah, I mean, this is like, this is it, this is the playbook. Like, you're going to hear more and more stories about this. Like, oh, OpenAI is training this or Google's training that, or Anthropic's training this. Like, this is, this is how it works. This is how the next like three to five years goes is like you just pick industry at a time, vertical at a time, and you just go train a model to do that work.
1:09:16
All right, next up, we got a peek at the Sora 2 product roadmap. So Bill Peoples, who's heading up Sora at OpenAI, outlined this roadmap this week on X and mentioned a bunch of updates that are coming. So the first big one is the addition of character cameos, which is going to let users bring their pets, toys or even generated characters into new videos. The app will also highlight trending cameos in real time, showing you what's popular across the the platform with people kind of putting their own likeness into these videos. They're also introducing basic video editing tools, starting with the ability to stitch clips together. And people said that very powerful editing features are on the way soon. They're also expanding social features. They're testing community channels for universities, companies and clubs, giving users ways to collaborate beyond the global feed. They're also improving, making the feed smoother and faster, doing lighter moderation apparently, and doing some ongoing performance upgrades. And there's an Android app release that will be coming soon. So, Paul, this is pretty interesting. Like they're clearly super excited about where Sora 2 is going. Doesn't sound like they really care about the unfurling backlash against Sora 2 and just full steam ahead.
1:10:43
Yeah, I don't remember if I said this on the last episode or if this was on our trends briefing with our AI Mastery members last week, but I am so unexcited about Sora. Like, the technology itself is incredible. Video generation I'm very excited about. I see enormous potential in it once you get over the issue of stealing people's copyrights and things like that and fair use. But the idea of an AI generated stream of stuff on an app is so unexciting to me. And I get that there may end up being a billion users on this platform over time and that it's like making Meta nervous and it's you know, emulating TikTok and that's obviously wild, wildly, you know, successful and popular and I may not have the best taste when it comes to like what works in social media. All that being said, this is so unexciting to me. Like I. We will talk about Sora because it is. They're putting a lot of compute power behind it and they believe it's important to something. But the idea of endlessly scrolling stream of AI generated stuff is just so opposite of what I want to see coming from these labs. And like if we're being led to believe this pursuit of, to benefit humanity and like solve cancer and all these things, I get that they say this might be a part of that and they have to fund that somehow, but I really just want to talk about that stuff and not this. So again, I don't know, like it's, it's interesting tech. It'll probably lead to some disruptive stuff within marketing and advertising and all that stuff. Like I don't have doubts about that, but the idea of a social channel dedicated to it is just very uninteresting to me.
1:11:57
All right, our friend. Oh, sorry, go ahead.
1:13:40
No, that might be another one to pull. Like just like how people feel about Soar. I'd be really fascinated.
1:13:42
I think that would be. No, I think that'd be super interesting because yeah, I'd be curious as well as like even just the usage of it. Like how much time do you even spend on the feed itself? You know?
1:13:49
All right, maybe it was another one next week. We'll see. All right, last topic.
1:13:57
Awesome. Last topic this week is about Tesla. Tesla's VP of Autopilot, Ashok El Swami has basically given a really cool overview on X of how Tesla is betting everything on end to end AI. So it kind of goes into how the company is approaching full self driving. So unlike many autonomous systems that have kind of a modular setup with separate components, Tesla trains a single neural network that directly maps camera pixels, audio, navigation and motion data to steering and acceleration commands. Basically, they argue this approach captures human like decision making better and scales more efficiently. He also shared examples of how AI chooses make certain decisions while driving, which are actually really interesting even if you're not interested in the technical pieces of this. Super cool. To see that and to train and test this intelligence, Tesla uses a ton of fleet Data, advanced generative 3D modeling and a neural world simulator capable of rendering entire driving scenes in real time. Now we're getting to kind of why we're talking about this, because Ellis Lamy says the same architecture underpins Optimus Tesla's humanoid robots. And Paul, during our prep this week, you also mentioned this had some parallels to how you see autonomy playing out in the business world.
1:14:01
Yeah, Just a quick note on this, and I've mentioned this in the past, it's probably been a little while since we talked about this, but I watch their self driving very closely because I think it has tremendous parallels to how this all played out in business. So for years, the way Tesla kind of assesses the improvement of the technology is like miles per intervention or disengagement. So when the human driver has to take over, and I've had. This is my third Tesla now, so I've been monitoring the self driving for seven years and it was like very incremental in its improvements and it always had these like really annoying things where you're constantly having to intervene or disengage the, the full self driving. I would say I've gotten now to the point where it's. It's probably about 95% of my driving is full self driving with no disengagements. Now there will still be a couple random ones, but it's starting to do things like you just wouldn't expect. Like when I was coming back the other day, like it stopped for a squirrel, like, and the squirrel wasn't even in the middle of the road. The squirrel was running through the grass by the curb and the car slowed itself so it sensed or saw a small object coming that wasn't in its way and actually anticipated that it might run into its way. First time I've seen that happen. There's stories of it, like routing around the puddles and things like that. Like you're just starting to see it do things you wouldn't expect to, where you. Less and less, you're still there, still hands on the wheel, but like less and less do I actually have to disengage the thing? And I think that's how AI agents will work in business. We will. You're going to be disengaging a lot. You're going to always be kind of watching them like you're doing the wrong thing and like you're going to say stop and like restart this or no, you got to go this path. And then I think over time, profession by profession, you're just going to start taking your hands off the wheel a lot more and you're just going to like, watch the thing go and like, wow, I haven't had to touch it in an hour. And A half. Like it's doing the thing I wanted it to do and I haven't disengaged it at all. So actions per disengagement is something I've been talking about for a couple years when it comes to agents. And so I think that as it starts to find its way into the software we all use, or the AI systems we use, or the browsers we all use, there's gonna be a lot of disengagements in the next year or two. And then profession by profession, maybe it's gonna take reinforcement learning, like the banking stuff with ChatGPT and OpenAI, you're just going to start seeing fewer disengagements and that's when jobs really start to transform. And so Tesla is so far ahead technologically from other cars. I've driven a number of other cars recently, like testing the technology. It's like seeing the future. When you get into a Tesla, it is so far beyond Cadillac and Audi and BMW, like not even comparable. And I think that's what happens here is you're going to have these platforms like a Gemini or ChatGPT that gets so far ahead and the people who are using that tech are seeing the future while everyone else is like thinking like, you know, automated cruise control is like futuristic. It's like you have no idea how far behind that is. I think that's what happens here. So yeah, just keep. We talk about Tesla a lot and in part it's because I think what they're doing in self driving translates over to automated work very clearly and it's, it helps us get like a frame of reference for how it's going to happen.
1:15:16
I love that. Paul, thanks for unpacking another busy week in AI. Appreciate it.
1:18:35
All right, and thanks everyone for joining us again. Check out Macon AI if you want to grab those on demand for 2025, get those 20 talks and then stay tuned. Next week, hopefully we'll be launching the AI pulse because I get into every week we do this. Like I want the feedback now. I want to know what people are thinking. Are we, are we crazy? Or is like everybody else feeling this? All right, thanks Mike.
1:18:40
Thanks Paul.
1:19:00
Thanks for listening to the Artificial intelligence show. Visit SmarterX AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in person events, taken online AI courses and earned professional certificates from our AI Academy and engaged in the Marketing AI Institute Slack community. Until next time, stay curious and explore. AI.
1:19:02