#162: GPT-5’s Messy Launch, Meta’s Troubling AI Child Policies, Demis Hassabis’ AGI Timeline & New Sam Altman / Elon Musk Drama
Episode 162 covers OpenAI's messy GPT-5 launch and subsequent fixes, Meta's controversial AI policies regarding interactions with minors, and Demis Hassabis' AGI timeline predictions. The hosts also discuss ongoing drama between Sam Altman and Elon Musk, plus various industry developments including government AI deals and educational AI initiatives.
- The frontier AI models have largely been commoditized - GPT-5 doesn't represent a massive leap forward over competitors like Gemini 2.5 Pro
- Companies need contingency plans for AI model dependencies as botched rollouts and performance changes can disrupt business workflows
- The motivations of AI lab leaders vary dramatically - from pure research (Hassabis) to commercial interests (Altman, Musk, Zuckerberg)
- AI safety and ethics decisions are made by humans at every step, from training data to guardrails, making leadership accountability critical
- Educational institutions that proactively integrate AI training are better positioning students for future careers
"At some point these labs have to work together. Like we will arrive at a point where humanity depends on labs and probably countries coming together to make sure this is done right and safely. And I just hope at some point everyone finds a way to do what's best for humanity, not what's best for their egos."
"I think everyone in AI should think about what their quote unquote line is, where if your company knowingly crosses that line and won't walk it back, you'll walk away."
"can you imagine me on an earnings call, like, self deprecating, like, I'm not the guy to be on earnings"
"less than 1% of ChatGPT users have unhealthy relationships with the chatbot"
"If Demis ever left Google, I would sell all my stock in Google. Like, I just, I feel like he is the thing that's the future of the company."
At some point these labs have to work together. Like, we will arrive at a point where humanity depends on labs and probably countries coming together to make sure this is done right and safely. And I just hope at some point everyone finds a way to do what's best for humanity, not what's best for their egos. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. Welcome to episode 162 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording August 18, 11 a.m. Eastern Time. I don't know that I expect as busy of a week, but who knows. Like, we just never know when new models are going to drop. But a lot of good stuff to talk about. Some of these are, I don't know, almost like drilling down a little bit into some bigger items we've hit on in recent weeks, Mike. Like, I think there's just some recurring themes here, and so, I don't know, plenty of fascinating things to talk about. So even in the weeks when there aren't models dropping, there's always something to go through. So we got a lot to cover. This episode is brought to us by the AI Academy by SmarterX launch event. So depending on what time you're listening to this, we are launching AI Academy 3.0 at noon Eastern on Tuesday, August 19th. So if you're listening before that and you want to jump in and join that launch event live, you can do that. The link is in the show notes. If you are listening to this after or just couldn't make the launch event, we will make it available on demand. So same deal, you can still go to the same link in the show notes. SmarterX.ai is the website where it's going to be, but you can go in there and watch it on demand. So we talked a little bit about this in recent weeks, but in essence we've had an AI Academy that offered online education and professional certificates since 2020, but it wasn't the main focus of the business. You know, SmarterX is an AI research and education firm. We have the different brands: Marketing AI Institute, this podcast would be a brand within SmarterX, and then AI Academy. But last November, you know, I made the decision to really put way more of my personal focus into building the Academy, and also the resources of the company behind it, and build out the staff there and really try and scale it up. So we've spent the better part of the last 10 months really building AI Academy, reimagining everything. And that's what we're going to kind of introduce on Tuesday, August 19, to share the vision and the roadmap and go through all this new stuff. Mike and I have been in the lab building for, I don't know, it feels like the last year of my life, but I would say intensely, Mike, what, like, eight to 10 weeks probably. You and I have been spending the vast majority of our time creating new courses, these new series we're launching, envisioning what AI Academy Live will become, this new gen AI product review series we're going to be doing with the weekly drops that Mike's going to be taking the lead on in the early going here.
And then we're just going to kind of keep expanding everything, you know, expanding the instructor network and building out personalized learning journeys. It's really exciting, honestly. Like, I've done a lot in my career, which, hard to believe, has been over the last 25 years now, and this is maybe the most excited I've ever been for a launch of something that we've built. And so I'm just personally really excited to get this out into the world and hopefully help a lot of people. I mean, our whole mission here is to drive personal and business transformation, to empower people to really apply AI in their careers and in their companies and in their industries, and give them the resources and knowledge they need to really be a change agent. And so, you know, I'm optimistic we're on the right path. I'm really excited about what we're going to bring to market. So again, check that out. If you're listening after August 19th at noon, don't worry about it, check out the link, and then we'll probably share some more details next week, and we have a new website we'll be able to direct you to that makes this all a lot easier. That's another thing, we've been behind the scenes building the website and getting all this stuff ready. So that'll be ready to go. All right. And then MAICON, we've been talking a lot about our flagship event. This is through our Marketing AI Institute brand. This is our 6th annual MAICON, MAICON 2025, happening October 14th to the 16th in Cleveland. Incredible lineup. I think this week we may announce a couple of the new keynotes we've brought in, so more announcements coming for the main stage general sessions. But you can go check it out. Probably, like, I don't know, 85, 90% of the agenda is live now. So go check that out at MAICON.ai, that is M-A-I-C-O-N dot AI. You can use code POD100 to get $100 off of your ticket. So again, check that out. We would love to see you there. Me, Mike, the entire team will be there. Mike and I are running workshops on the first day, and then you have presentations throughout, and we'll be around. So again, Cleveland, October 14th to the 16th, MAICON.ai. All right, Mike. It has not been a great week for OpenAI. I mean, they've got their new model. We talked a lot about the new model last week, but yeah, they were busy in crisis communications mode all week, kind of trying to resolve a lot of the blowback they got from the new model and how they rolled it out. So let's catch up on what's going on with OpenAI and GPT-5.
0:00
Yeah, you are not wrong, Paul, because in the week and a half since GPT-5 launched, OpenAI has kind of found itself scrambling to respond to both public outcry and some company missteps that they've made and acknowledged related to this launch. So here's a rough timeline of what's been going on. GPT-5 drops on August 7. Just one day later, OpenAI is already dealing with a crisis. Many users were up in arms about the fact that the company, basically almost on a whim, decided to get rid of legacy models. At the time, everyone was forced to use GPT-5 rather than pick between the new model and older ones like GPT-4o. Users at the time were also upset about some surprising rate limits, especially for Plus subscribers, and the fact that GPT-5 at the time didn't seem all that smart. Now, Altman took the lead, posting on X on August 8 to address these concerns. He noted at the time that OpenAI would double GPT-5 rate limits for Plus users, that Plus users would be able to continue to use 4o specifically, and that there had been an issue with the model's auto-switcher, which switches between models, that had caused temporary issues with its level of intelligence. Then, just a few days later, on August 12, Altman shared even more changes: users can now choose between Auto, Fast, and Thinking modes in GPT-5. The rate limits for GPT-5 Thinking went up significantly, and paid users also got access to other legacy models like o3 and GPT-4.1. Altman also said the company is working on updating GPT-5's personality to feel warmer, since there was backlash about that from users, too. So, Paul, this has been an interesting one to follow. Like, it's good to see OpenAI responding quickly to user feedback, but trying to keep up with all these changes that they're making to this model right out of the gate, I don't know about you, but it's giving me whiplash, personally. Like, what's going on?
6:00
Oh, yeah. I mean, I've been trying to follow along, obviously, daily. I mean, we've been tracking this and reading the updates from Sam, reading the updates from OpenAI for the Exec AI Newsletter on Sunday. Like, I was going through on Saturday morning, trying to understand what's going on, reading the system card, trying to understand the different models and how they relate. Because in the system card, they actually show, like, okay, if you were on 4o, the new one is GPT-5 main. If you were using 4o mini, the new one is GPT-5 main-mini. If you were on o3, and you and I love the o3 model, that's now GPT-5 thinking. If you were on o3 Pro, and you and I both pay for Pro, that's now GPT-5 thinking-pro. Because I've actually been trying. I've been working on a couple of things, like finalizing some of these courses for the Academy launch, and I use deep research, I use the reasoning models. So I use Gemini 2.5 Pro, and then I often would use o3 Pro. And I'm like, wait, what model am I using? Do I use the thinking model? Do I use the. Oh, wait, no, no, no, it's the thinking pro. And I'm back to, like, this confusion about what to actually use. And it's tricky because, honestly, like, we talked about this on the last episode, I didn't have the greatest experience in my first few tests of GPT-5 and this router, where it's like, I don't even know if it's using the reasoning model when I'm asking it something that would require reasoning, because it wasn't telling you what model it was using. So I wanted the choice back, but it's like I wanted the choice hidden. Like, I want to eventually trust that the AI is just going to be better at picking what model to use or how to surface the answers for me. But it was very obvious initially that that was not the case, that the router wasn't actually doing a great job, or at least the transparency was missing from it. So, I don't know. I mean, you covered a lot of the things they changed, and I don't want to reiterate all of that. I think that, you know, maybe there's just, like, business and marketing and product lessons to be learned by everyone here. Like, as you think about your own company and you think about your customers and doing these launches, I mean, even top of mind for me, honestly, with our AI Academy rollout, you can take missteps. Like, you're moving fast. Like, there's lots of moving pieces, as there were with the GPT-5 launch. You got product working on a thing, you got marketing doing a thing, you got leadership doing their thing, and somehow you got to bring it all together to release something. And when you're doing things fast, you're not always going to get it perfect, but you try and think ahead on these things. And so, I don't know, I think they have some humility. Like, Sam, again, you can judge however you want the decisions they made and whether the model was rolled out properly, but at least they're stepping up and saying, yeah, we kind of screwed up. Like, he admitted this to some journalists on Thursday. Like, it just wasn't. We didn't do it right. There were a bunch of things we should have changed. And so I think part of this is interest in the model, and part of it is, you know, we can all kind of learn. They're taking risks out in the open that a lot of companies wouldn't take, and they're launching things to 700 million users.
Like, most of us in our careers will never launch to that many people, and it's not going to be perfect. So I don't know. I think that's part of what I've been fascinated by in this whole process, just watching how they've adapted. And, you know, I spent a fair amount of my early career working in crisis communications, and it's like a live case study of all this stuff. So I don't know, I think it's intriguing. I think the changes they're making make sense. I think they'll figure it out. But like I said last week, my biggest takeaway from all this is they don't have a lead anymore. Like, the biggest thing I was waiting for with GPT-5 was: was it going to be head and shoulders better than Gemini 2.5 Pro and the other leading models? And the answer is no, it does not appear to be a massive leap forward. And I fully expect Gemini, you know, to have a newer model soon, and the next version of Grok and the next version of Claude to probably be at least scoring-wise better than GPT-5. So I think that's the most significant thing in all of this, that the frontier models have largely been commoditized, and now the game changes. It's no longer about who has the best model for a year-or-two run. It's now all about all the other elements of this.
8:21
What also jumped out to me from a very practical kind of applied AI day to day perspective is you really, really, really need to have a process for cataloging and testing your prompts and your GPTs, since GPTs are going to be forced over to the new models at some point as well.
12:33
October, I think they said.
12:55
Yeah, yeah, I think it's like 60 days from the announcement. So yeah, that puts it roughly in October.
12:56
Yeah, I actually got an email over the weekend that said your GPTs would default to GPT-5 as of October.
13:01
Yeah, and I think that's not necessarily the end of the world. There are ways around it if your GPTs break. But if you haven't gotten to this stage yet, if you're relying on GPTs or certain prompting workflows to get real work done, you probably want to be testing those with other models too. Because if something like this happens, if there's a botched rollout, issues with a launch, whiplash back and forth between new things being added or taken away, that can get really chaotic if you're fully dependent on a single model provider.
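To make that concrete, here is a minimal sketch of what a prompt regression check could look like, using the OpenAI Python SDK. The model names, prompts, and pass/fail checks below are illustrative placeholders, not anything specific the hosts use; the idea is just to catalog the prompts your workflows depend on and re-run them whenever a provider swaps models underneath you.

```python
# Minimal prompt regression check: run each cataloged prompt against the
# models you depend on and flag outputs that fail a cheap sanity test.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment; model names and checks are illustrative.
from openai import OpenAI

client = OpenAI()

# Each catalog entry pairs a prompt you rely on with a deterministic check.
PROMPT_CATALOG = [
    {
        "name": "meta_description",
        "prompt": "Write a meta description under 160 characters for a post on AI literacy.",
        "check": lambda out: 0 < len(out) <= 160,
    },
    {
        "name": "subject_lines",
        "prompt": "Give exactly 5 email subject lines for a course launch, one per line.",
        "check": lambda out: len(out.strip().splitlines()) == 5,
    },
]

MODELS_TO_TEST = ["gpt-5", "gpt-4.1"]  # swap in whatever models you actually use

for model in MODELS_TO_TEST:
    for item in PROMPT_CATALOG:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": item["prompt"]}],
        )
        output = resp.choices[0].message.content or ""
        status = "PASS" if item["check"](output) else "FAIL"
        print(f"{model} / {item['name']}: {status}")
```

Run something like this against a new default model before it takes over, and you'll know which of your prompts quietly broke.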
13:08
I think, yeah, not to mention all the SaaS companies who build on top of these models through the API. If the API gets screwed up, if the model doesn't perform as well, then all of a sudden, you may not even know you're using OpenAI's API within some third-party software product, like Box or HubSpot or, you know, Salesforce, Microsoft. Like, they're all built on top of somebody else's models, and if a change affects the performance of the thing, all of a sudden it affects the way your company runs. And yeah, these are very real things that you honestly need to contingency plan for when these impacts happen. Like we've talked about before on the podcast, what if the API goes down? What if the solution is just completely not available, and your company, your workflows, your org structure are dependent upon this intelligence, these AI assistants, AI agents, and then they're just not available, or they don't perform like they're supposed to, or they got dumber for three days for some reason? Like, these are very real things. This is going to be part of normal business moving forward, and I don't know anybody who's really prepared for that.
13:39
Yeah, I know we haven't done this at SmarterX, and we're probably some ways away from doing this, but at some point you probably are going to just want to have backup locally run open source models so you have access to some intelligence, right? If something goes down, I mean, those change all the time, but that might be worth a long-term consideration, especially because there's going to be a point, we've talked about this, where as AI is infused deeply enough in every business, you won't be able to do anything without it.
14:47
Yeah, it's interesting. Like, we just upgraded the Internet connections at the office, and, you know, like you're saying, it's almost like that, where we're keeping the new main line, but then you keep the old service, which isn't as good, but it functions. Like, you can still function as a business if the main one goes down. So you have two different providers, and if one goes down, hopefully the other redundancy is there, even if it's not as efficient or as powerful. And yeah, it's an interesting perspective. Like, you could see where you have, you know, the more efficient smaller models that maybe run locally, that you build, and maybe they're just the backup models. But yeah, people are going to be very dependent upon this intelligence, and you've got to start thinking about the contingency plans for that. And that's where the IT department, the CIO, the CTO, that's where they become so critical to all of this.
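That failover idea can be sketched in code, too. Here's a rough pattern, assuming the OpenAI Python SDK for the hosted model and a locally served open-source model behind an OpenAI-compatible endpoint (Ollama's local server is used here as one example); every model name and endpoint below is a placeholder, not a recommendation.

```python
# Rough failover sketch for model dependencies: try providers in order,
# falling back to a locally served open-source model if hosted APIs fail.
# Assumes the OpenAI Python SDK; the local endpoint is Ollama's
# OpenAI-compatible server, and all model names are placeholders.
from openai import OpenAI

PROVIDERS = [
    # (label, client, model), ordered from preferred to last resort
    ("hosted", OpenAI(), "gpt-5"),
    ("local", OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"), "llama3.1"),
]

def complete_with_failover(prompt: str) -> str:
    last_error = None
    for label, client, model in PROVIDERS:
        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content or ""
        except Exception as err:  # outage, auth failure, rate limit, etc.
            last_error = err
            print(f"{label} failed ({err}); trying next provider")
    raise RuntimeError(f"All providers failed; last error: {last_error}")
```

The local model will be slower and weaker, like the backup internet line, but the workflow keeps running.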
15:16
All right, our next big topic this week: we have a 200-page internal policy document on Meta's AI behavior standards that Reuters has obtained. Unfortunately, this document included guidance that Meta was explicitly permitting bots to engage in romantic or sensual chats with minors, so long as they did not cross into explicit sexual territory. So Reuters has this exclusive deep dive into the leaked document, and basically it discusses the standards that guide Meta's generative AI assistant, Meta AI, and the chatbots that you can use on Facebook, WhatsApp, and Instagram. Now, it's not out of the ordinary to have documents like this. This is a guide for Meta staff and contractors, basically, on what they should, quote, treat as acceptable chatbot behaviors when building and training the company's generative AI products. That's according to Reuters. But where it gets tough is that some of these standards are just really controversial. So they say, quote, it is acceptable to describe a child in terms that evidence their attractiveness, according to the document, but it draws the line explicitly at describing a child under 13 in terms that indicate they are sexually desirable. Now, that rule has since been scrubbed, according to Meta, but it was not the only one that Reuters flagged as very concerning. The same document also allowed bots to argue that certain races are inferior, as long as the response avoided dehumanizing language. Meta claims these examples were, quote, erroneous, and, quote, inconsistent with its policies. Yet this document was reviewed and approved by the company's legal team, policy team, engineering team, and, interestingly, its chief ethicist. The document also okayed generating false medical claims or sexually suggestive images of public figures, provided disclaimers were attached or the visual content stayed just absurd enough that you would know it's not, like, actually real. The company says it's revising the guidelines, but the fact these rules were in place at all, at any point, is raising some pretty serious questions. So, Paul, this is definitely a really tough topic to research and discuss. Every AI company out there, it should be said, has to make decisions about how humans can and can't interact with their models. I'm sure there is a lot of tough stuff being discussed and seen in these training data sets; we've talked about humans having to label that data. But I don't know, just something about this seems to go out of bounds in some very worrying ways. And I'm wondering if you can maybe put this in context for us and talk through what's worth paying attention to here beyond kind of the sensational headline.
16:03
These are very uncomfortable conversations, honestly. So, I mean, I've said before, I have a 12-year-old and a 13-year-old. They're not on social media, and hopefully will not be for a number of years here. Meta has a lot of users across Facebook and Instagram and WhatsApp, and they affect a lot of people. It's a primary communications channel, it's a primary information-gathering channel, and so it's an influential company. Now, on the corporate side, this isn't necessarily affecting many of us from a business user perspective. I mean, we use these social channels to promote our companies and things like that, but we're not building their agents into our workflows. It's not kind of like Microsoft and Google. But it still has a massive impact, especially if you're a B2C company and you're dependent upon these channels to communicate with these audiences. So I think it's extremely important that people understand what's going on and what the motivations of these companies are. I mean, Meta is one of the five major frontier model companies that is going to play a very big role in where we go from here. So, I don't know. I went into Facebook, and I don't use Facebook very often, and I don't have access to these characters through Facebook. I don't even know how you would get to them, honestly. And so I went into Instagram, and I didn't see it there, but then I just did a search and found they have aistudio.instagram.com, which you can go to and actually look at the different characters they're creating that people would be able to interact with. Because I had seen a tweet, I think it was over the weekend, from Joanne Jang from OpenAI, and she had shared a post that showed, what was it, we had "Russian Girl," who obviously.
19:02
These are AI characters.
20:49
Yes, an AI character. "Russian Girl" is a Facebook character, 5.1 million messages, and definitely a teen. And then this "Stepmom" character, which was at 3.3 million. And so she reshared this post that someone had put up: oh man, this is nasty, is this AI stepmom what Zuck meant by personal superintelligence? And Joanne's post, which I thought was important, said: I think everyone in AI should think about what their quote unquote line is, where if your company knowingly crosses that line and won't walk it back, you'll walk away. This line is personal, will be different for everyone, and can feel far-fetched, even. You don't have to share it with anyone. But I recommend writing it down as an anchor for your future self, inspired by two people I deeply respect who just did, from different labs. So she, as an AI researcher working within one of these labs, is basically saying: the companies we work for are going to make choices. Some of these choices are going to be counter to your own ethics, morals, principles, and you have to know where the line is, when you're going to walk away. And the Reuters article, Mike, that you mentioned, I would recommend people read. Again, this is harder stuff to, like, think about. It's easier to go through your life ignorant of this stuff. Trust me, I try sometimes. But it talks about this being built into their AI assistants: Meta AI, the chatbots within Facebook, WhatsApp, Instagram. Meta did confirm the authenticity of the document. The company, as Mike mentioned, removed portions which stated it is permissible for the chatbot to flirt and engage in romantic roleplay with children. Meaning it was allowed. It was permissible. Meta's spokesperson, Andy Stone, said the company is in the process of revising the document and that such conversations with children never should have been allowed. Keep in mind, some human wrote these in there, and then a bunch of other humans with the authority to remove them and say, this is not our policy, chose to allow them to stay in it. So we can remove it now and we can say, hey, it shouldn't have been in there, but it was. And people in power at Meta made the decisions to allow these things to remain. They had an interesting perspective from a professor at Stanford Law School who studies tech companies' regulation of speech, and I thought this was a fascinating perspective. There are a lot of unsettled legal and ethical questions surrounding generative AI content. She said she was puzzled that the company would allow bots to generate some of the material deemed acceptable in the document, such as the passages on race and intelligence, noting that there's a distinction between a platform allowing a user to post troubling content and a platform producing that material itself. So Meta, as the builder of these AI characters, is allowing those characters, which are an extension of Meta, to create things that are ethically and legally questionable. So I think that's the biggest challenge, where this all goes from a legal perspective. And they very quickly heard from the US government. Senator Josh Hawley said he is launching an investigation into Meta to find out whether Meta's generative AI products enable exploitation, deception, and other criminal harms to children, and whether Meta misled the public or regulators about its safeguards.
Hawley called on CEO Mark Zuckerberg to preserve relevant materials, including any emails that discussed all this, and said that Meta must produce documents about its generative AI-related content risks and standards, lists of every product that adheres to those policies, and other safety and incident reports. So, I don't know, this kind of goes back to, I think it was episode 161, I think it was just last week when I was talking about this, maybe it was 160: people have to understand there are humans at every aspect of this. Like, yes, we're building these AI models, and they're kind of like alien intelligence, and we're not even really sure exactly what they're capable of or why they're really able to do what they do. That being said, there are humans in the loop at every step of this: the data that goes in to train them, the pre-training process, the post-training, where they're adapted to be able to do specific things and learn, you know, what's a good output, what's a bad output, the system prompt that gives it its personality, the guardrails that tell it what it can and can't do. Because the thing that you have to keep in mind is they're trained on human data, good and bad. They learn from all kinds of stuff, things that many of us might consider well beyond the boundaries of being ethical and moral. They still learn from that. And at the end of the day, they just want to do what they're asked to do. Like, they have the ability to do basically anything you could imagine, good and bad. They just want to answer your questions, they want to fulfill your prompt requests. It's humans that tell them whether or not they're allowed to do those things. And so when you look at the stuff in the Reuters article, it's almost hard to imagine the humans on the other end who are sitting there deciding the line, like, where is it no longer okay to say something to a child? So it's okay if it says this, but not this. And then you have to figure out how to prompt the machine to know that boundary every time someone tries to get it to do something bad. It's just a really difficult thing to think about, and it's not going to go away. Like, this is going to become very prevalent. I think we're almost kind of like in 2020 to 2022, where we were looking out, we knew the language models were coming, we knew they were going to be able to write like humans. We wrote about it in our book in 2022: what happens when AI can write like humans? And at the time, people hadn't experienced GPTs yet. And I kind of feel like that's the phase we're in right now with all of the ramifications of these models. The vast majority of the public has no idea that these things are capable of doing this, that these AI characters exist, that they can do things you wouldn't want them doing, conversations you wouldn't want them having with your kids. Most people are blissfully unaware that that's the reality we're in. And like I said, I'd love to live in the bubble and pretend it's not. This is the world we are in, the world we were given, and we just got to kind of figure out how to deal with it, I guess. I don't know.
20:50
Yeah. If you were someone who was blissfully unaware of this, sorry for this segment, but it is deeply important to talk about. Right? Because you have to have some, you know, the term we always throw around in other contexts is, like, situational awareness. Right? But there's some to be had around this, especially if you have kids.
27:08
Yeah. And I mean, I don't want to get into this stuff too much. There are much darker sides to this, and I think you have to pick and choose your level of comfort with how far down the rabbit hole you want to go on this stuff. But I think if you have kids, especially in those teen years, you have to at least have some level of competency around these things so you can help guide them properly. We'll put a link to the KidSafe GPT. I built a GPT last summer called KidSafe GPT for Parents that's designed to actually help parents sort of talk through these things, figure these things out, put some guidelines in place. And that might be a good starting point for you if this is tough for you and you're not really sure even how to approach it with your kids. That GPT does a really nice job of just kind of helping people. I trained it to be, like, an advisor to parents to help them, you know, figure out online safety stuff for their kids.
27:25
All right. Our third big topic this week: a new episode of the Lex Fridman podcast gives us a rare, in-depth, long-form conversation with one of the greatest minds in AI today. In it, Fridman conducts a two-and-a-half-hour interview with Google DeepMind CEO and co-founder Demis Hassabis. Hassabis covers a huge amount of ground. He talks about everything from Google's latest models to AI's impact on scientific research to the race towards AGI. And on that last note, Hassabis says he believes AGI could arrive by 2030, with a 50/50 chance of it happening in the next five years. And he has a really high bar for his definition of AGI. He sees it as AI that isn't just brilliant at narrow tasks, which is what plenty of people would define as AGI, but consistently brilliant across the full range of human cognitive work, from reasoning to planning to creativity. He also believes AI will surprise us, like DeepMind's AlphaGo system once did with its famous Move 37. He imagines tests where an AI could invent a new scientific conjecture, the way Einstein, for instance, proposed relativity, or even design an entirely new game as elegant as the game of Go itself. He does, however, still stress uncertainty. Today's models are scaling impressively, but it is unclear whether more compute alone is going to get us to this next frontier, or whether entirely new breakthroughs are needed. So, Paul, there's a lot going on in this episode, and I just want to turn it over to you and ask what jumps out as most noteworthy, because Demis is definitely someone we have to pay attention to.
28:27
Yeah. So, you know, I've listened to, I don't know, almost every interview Demis has ever given. Like, I've been following him since 2011. And the thing that really started sticking out to me this past week, I listened to two different podcasts he did, is the juxtaposition of listening to him speak about AI and the future versus all the other AI lab leaders. It's somewhat jarring, actually, how stark the contrast is between how he talks about the future and why they're building what they're building, and the approach the other people are taking. So, you know, I mentioned this recently: we basically have five people that are kind of figuring all this out and leading the future of AI. You have Dario Amodei at Anthropic, came from OpenAI, physicist turned AI safety researcher and entrepreneur. You have Sam Altman, you know, capitalist through and through, entrepreneur, investor, co-founded OpenAI with Elon Musk as a counterbalance to the perception that Google couldn't be trusted to shepherd AGI into the world. You have Elon Musk, richest person in the world, obviously one of the great minds, inventors, entrepreneurs of our generation, but it's also unclear what his motives are, especially with xAI, and why he's pursuing AGI and beyond. It does seem contrary to his original goals, where he wanted to build it and safely shepherd it into the world. And I think right now he and Zuckerberg are the most willing to push the boundaries of what most people would consider safe and ethical when it comes to AI in society. Then you have Zuckerberg, the third richest person in the world, made all his money selling ads on top of social networks. And so, you know, his motivation, while it may go beyond this, has largely been to generate money by engaging people and keeping them on his platforms. And then you have Demis, who is a Nobel Prize-winning scientist who built DeepMind to solve intelligence and then solve everything else. Since he was, like, age 13, as a child chess prodigy, he's been pursuing the biggest mysteries of the universe: where did it all come from? Why does gravity work? How do we solve illnesses? That's where he comes from. And so he won the Nobel Prize last year for AlphaFold, which is an AI system developed by DeepMind that revolutionized protein structure prediction. But I also think he's not done. I've said on stage for the last 10 years, I've used his definition of AI since probably 2017, 2018, when I was doing public speaking on AI, and I always said I think he'll win multiple Nobel Prizes. I think he'll end up being one of, if not the most significant person of our generation for the work he's doing. His definition of AI, by the way, that I reference, is the science of making machines smart. It's just this idea that we can have machines that can think, create, understand, reason, and that was never a given. Like, up until 2022, when all of us experienced gen AI, most people didn't agree with that. We didn't know that was actually going to happen. So I think when I listen to Demis, it gives me hope for humanity. Like, I feel like his intentions are actually pure and science-based. And this idea of solving intelligence to get to all the other stuff, I find that inspiring.
And so the one thing that is sticking out to me as I was listening to this Lex Fridman interview is it's almost like if you could go back and listen to, like, von Neumann or Jobs or Einstein or Tesla, if you could actually hear their dreams and aspirations and visions and inner thoughts in real time as they were reinventing the future. That's kind of how it feels when you listen to him. When you listen to the other people, it just feels like they're just building AI, and they're going to figure out what it means, and they're going to make a bunch of money, and then they'll figure out how to redistribute it. It just feels economics-driven, where Demis just feels purely research-driven. The other thing I was thinking about this morning, as I was going through the notes getting ready for this, is what the value of Demis and DeepMind is. So I've said this before: if Demis ever left Google, I would sell all my stock in Google. I just feel like he is the thing that's the future of the company. But I started to put it into context. Google paid 650 million for DeepMind in 2014. OpenAI today is rumored to be worth 500 billion, that's the latest number we heard, right, Mike, with their latest round? DeepMind as a standalone lab, like, if Demis left tomorrow and just, you know, did his own thing, or DeepMind just spun out as a standalone entity, that company is easily probably worth half a trillion to a trillion dollars. Like, xAI is worth 200 billion, Anthropic's at 170 billion, Safe Superintelligence, 32 billion, Thinking Machines Lab, which isn't even a year old, 12 billion. You take DeepMind out of Google, what is that company worth on its own? And so then I started realizing there's just no way Wall Street has fully factored the value and impact of DeepMind into Alphabet's stock price. Because if Demis left tomorrow, Google's stock would crash. Like, the future, the value of the company, is dependent upon DeepMind. So, all that context, I would really advise people: if you haven't listened to Demis speak before, give yourself the grace of two hours and 25 minutes and listen to the whole thing. Now, the interview gets a little technical, especially in the early going, it's definitely a little technical, but I would ride that out. I would sort of see that through, because the technical parts help you realize how Demis sees the world, which is: if it has a structure, like if it has an evolutionary structure, whatever that is, he believes you can model it and you can solve for it. And so anything in nature that has a structure, they look at, like proteins, and figure out how to do it with AI. And so it really becomes fascinating. He talks about Veo 3, their video generation model, and how surprised he was that it seems to have learned physics through observation. Like, prior to that, they thought you had to embody intelligence, like in a robot, and it had to be out in the world, experiencing the world, to learn physics and nature. And yet they somehow just trained it on a bunch of YouTube videos, and it seems to be able to recreate the physics of the universe, and that was surprising to them. He talks about the origins of life and his pursuit of AI and AGI, and why he's doing it: to try and understand all of these big things.
And then he gets into the path to AGI, Mike, like you had talked about, and kind of how he sees that playing out. He gets into the scaling laws and how they don't really see a breakdown in them; they may be slowing down in one aspect, but they're speeding up in others. He talks about the race to AGI, competition for AI talent, humanity, consciousness. It's just a very far-ranging conversation, but truly one of the great minds, probably, in human history, and you get to listen to him for two hours and 25 minutes. It's crazy that we're actually at a point in society where it's free to listen to someone like that speak for two hours. So, I don't know. I mean, I'm obviously, like, a huge fan of his, but I just think that if you care deeply about where all this is going, it's really important to understand the motivations of the people driving it. And like I said in a previous episode, there are, like, five major people right now that are driving that. And I think that listening to Demis will give you hope. It's a lot to process, but I do think that, you know, you can see why there's some optimism about a future of abundance if the world Demis envisions becomes possible. So, yeah, I don't know. Every time I listen to his stuff, I just have to kind of step back and think bigger picture, I guess.
30:15
Yeah. And I don't know if you would agree with this, but despite him painting this very radical picture of possible abundance, I don't know if I've ever heard anyone with less hype in this space than Demis provides when he talks.
38:39
Yeah, totally. And you know, he's a researcher. Like, the reason he sold to Google, and he said this, like, yeah, he could have taken more money from Zuckerberg, they could have sold DeepMind for more money, was because he thought the resources Google offered would accelerate his path to solving intelligence. He didn't do it to, like, productize AI. He actually probably got dragged into having to do that when ChatGPT showed up and they had to combine Google Brain and Google DeepMind, and then he became the CEO of Google DeepMind, which became the sole AI lab within Google. He's not a product guy. Like, it turns out he's actually a really good product guy, but not by choice or by design. He ended up seeing, it sounds like, the value of having Google's massive distribution into their seven products and platforms with a billion-plus users each, where you could actually test these things. And he realized, okay, having access to all these people through these products enables us to advance our learnings faster. But yeah, just an infinitely fascinating person. And like I said, not to diminish what the other people are doing, but it's just very different. Like, it's very different motivations. And he does a great job of explaining things in simple terms, other than the first, like, 20 minutes. I mean, you gotta hit pause a few times and maybe Google a couple things as you're going, to understand some of the stuff they're talking about, because Lex tends to ask some pretty advanced questions, and it's kind of tricky to follow along a little bit. But like I said, if you're not that intrigued by the stuff they're talking about early on, just kind of ride through it and you'll come out the other side and it'll be worth it. Because some of the stuff they talk about is actually fascinating to pause and go search a little bit and understand, because it changes your perspective on things, actually, once you understand it.
38:55
All right, let's dive into some rapid fire this week. First up, Sam Altman recently told reporters that OpenAI will, quote, spend trillions of dollars on AI infrastructure in the not very distant future. To fund this, Altman says OpenAI may design an entirely new kind of financial instrument. He also noted that he expected economists to call this move crazy and reckless, but that everyone should, quote, let us do our thing. And these comments came right around the same time that Altman had an on-the-record dinner with journalists where he talked about where OpenAI is headed after GPT-5. Now, GPT-5's rollout did overshadow the conversation. This was reported on by TechCrunch. Altman admitted that OpenAI screwed up by getting rid of GPT-4o as part of the launch; obviously, we talked about how they later brought it back. But ultimately, he did want to talk a bit more about what comes next. So, some notable possible paths forward he mentioned: he said that OpenAI's incoming CEO of Applications, Fiji Simo, will oversee multiple consumer apps outside of ChatGPT that haven't yet launched, so we're getting even more apps from OpenAI. She may also oversee the launch of an AI-powered browser. Altman, interestingly, also mentioned OpenAI would be open to buying Google Chrome, which Google may be forced to sell as part of an antitrust lawsuit; we're actually going to talk a little bit more about that in a later topic. He also mentioned that Simo might end up running an AI-powered social media app. And he said that OpenAI plans to back a brain-computer interface startup called Merge Labs to compete with Elon Musk's Neuralink, though that deal is not yet done. So, Paul, there are a lot of different threads going on in these on-the-record comments from Altman. I'm curious what stood out to you here, but I'd also love to get your take on his decision to have dinner with journalists in the first place. Like, is he trying to get everyone to move past the GPT-5 launch and talk about what's next?
40:55
The dinner is interesting, because I think they said there were 14 journalists at this dinner, and it doesn't sound like they really knew why they were there or what the purpose of the dinner was. In the TechCrunch article in particular, the journalist was literally like, it wasn't really clear why we were there, we didn't really talk about GPT-5 till later in the night, Sam was just sort of off the cuff talking about whatever. So yeah, it's kind of a fascinating decision. I guess the one thing that jumped out at me right away was, back in February 2024, we reported on the podcast about a Wall Street Journal article that said Altman was seeking up to $7 trillion to reshape the global semiconductor industry. And at the time, OpenAI was like, well, you know, lots of money. They didn't necessarily confirm that was the number, but there was enough insider stuff that it's like, that's probably not far off from what Sam was telling potential investors they would need to raise over, say, the next decade to build out what they need to build out with data centers and energy and everything. And so this is the first time, I think, where he officially said, yeah, we think we're going to need to raise trillions. Like, that 7 trillion probably wasn't that crazy of a number. The other things: you mentioned the browser, the social experience. It's been bubbling the last couple weeks that they might try and build something to compete with xAI, or with X/Twitter. The brain-computer interface thing, where I think it said he was going to take, like, a leadership role in that company potentially, the deal not done yet, but that was interesting. The other one, going back to the Meta thing: Altman said he believes, quote, less than 1% of ChatGPT users have unhealthy relationships with the chatbot. Keep in mind, 700 million people use it. 1% is not an insignificant number of people that they think have unhealthy relationships with their chatbots; we're talking about millions of people. GPT-5 launched, and they said, yeah, it didn't go great. However, their API traffic doubled within 48 hours of the launch. So it doesn't seem like it affected them, but they were effectively, quote unquote, out of GPUs, meaning they're running low on chips to do the inference, to deliver the outputs for people when they're, you know, talking to GPT-5 and things like that. The TechCrunch writer said it seems likely that OpenAI will go public to meet its massive capital demands. As part of that picture, in preparation, I think Altman wants to hone his relationship with the media, but he also wants to get OpenAI to a place where it's no longer defined by its best AI model. I thought that was an interesting take. And then the other thing, I don't remember if it was in that article, but I saw this quote in another spot: they asked him about, you know, going public, and he said he can't see himself as the CEO of a publicly traded company. I think he said, quote, can you imagine me on an earnings call, like, self-deprecating, like, I'm not the guy to be on earnings. Which is fascinating, because if you remember, when they announced the new CEO, I said at the time, I think this is a prelude to him stepping down as CEO. Like, I think he has other things he wants to do. I think he would remain on the board, obviously, and I think he would remain involved in OpenAI.
But I could see in the next one to two years where Sam slowly steps away as the CEO. And based on that comment, I would not be shocked at all if it happened prior to them going public. I don't know. They certainly seem to be positioning him not to necessarily be the CEO. So something to keep an eye on. First time I've heard him say it out loud.
43:09
Yeah. Super interesting. Well, in our next topic, Sam Altman is also having, I guess you could call it fun, or maybe it's frustration, with Elon Musk, because the two of them are now feuding publicly again. On August 11, Musk posted on X; he was talking a lot about Apple and the App Store and xAI's position in the App Store. And he said that Apple at one point, quote, was behaving in a manner that makes it impossible for any AI company besides OpenAI to reach number one in the App Store, which is an unequivocal antitrust violation. He then said X would take immediate legal action about this. Now, this is where it becomes important to Altman, because Altman replied to this post, saying, quote, this is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like. Musk shot back: you got 3 million views on your BS post, you liar, far more than I've received on many of mine, despite me having 50 times your follower count. Altman then responded, saying to Musk that if he signed an affidavit that he has never directed changes to the X algorithm in a way that has hurt competitors or helped his own companies, then Altman would apologize. Things devolved from there. At one point, Musk called Altman "Scam Altman," a new nickname I think he's trying to make stick. So, Paul, on one hand this just feels like juvenile high school drama laid out in public between two of the most powerful people out there. But on the other, it does feel like the tone between these two has gotten more aggressive. Like, are we headed for more trouble here?
46:41
Well, I think there was a time where Sam was trying to just defuse things and let the legal process take place and not get caught up in this. And he has definitely entered his don't-give-a-crap phase. Like, I don't know what changed for him personally, I don't know what changed legally, but he just doesn't care anymore. And now he's just baiting Elon into this stuff and having fun with it. Like, I think when Elon posted the one about him getting, you know, more views and things, Sam replied, "skill issue?" Like, I'm just better at this than you. Yeah. And I guess, I don't know. Like, again, not to judge them; everybody's got their own approach to this stuff. But my point, going back to it: here are two of the five that are shepherding us into AGI and beyond, and they're having a spat on Twitter. There was a great meme I saw where it was a cafeteria fight, and it was, like, Sam versus Elon with the names on it, and then Demis, or Google DeepMind, is just sitting at the table, eating their lunch, just locked in, focused. Like, they're just going to keep going while this other madness is happening behind them. And that's kind of how I feel right now. It's like, eye on the prize. DeepMind is just the more serious company, I guess. Doesn't mean they win, doesn't mean, like, anything. It just is what it is. DeepMind is staying locked in. Demis plays all sides, just, like, congratulates people when they launch new models, stays professional about this stuff. I can't fathom Demis ever doing anything like this. It's just a different vibe. Again, maybe not better, maybe not worse. I don't know. It just is what it is; just sharing observations. So I don't know what these two are doing. But my one hope for all of this is: we get two, three years down the road, we are at AGI, beyond AGI, superintelligence is within reach. At some point, these labs have to work together. Like, we will arrive at a point where humanity depends on labs and probably countries coming together to make sure this is done right and safely. And so I hope the bridges aren't completely burned. I know they have a lot of mutual friends, and I just hope at some point everyone finds a way to do what's best for humanity, not what's best for their egos.
48:29
That would be nice.
50:55
Yeah, it would be nice.
50:56
All right, next up: one of the top people at Elon Musk's xAI is stepping away. Igor Babuschkin, who co-founded the company in 2023 and led its engineering teams, announced he's leaving to start a new venture capital firm focused on AI safety. Babuschkin says he was inspired after a dinner with physicist Max Tegmark, where they discussed building AI systems that could benefit future generations. His new fund, Babuschkin Ventures, aims to back startups that advance humanity while probing the mysteries of the universe. Babuschkin said in a post on X that he has, quote, enormous love for the whole family at xAI, and he had nothing but positive things to say about his work at the company. The timing, however, is a little interesting. xAI has been under fire for repeated scandals tied to its chatbot Grok, things like parroting Musk's personal views and spouting antisemitic rants, and we've talked about a lot of the controversy around the images generated by its image generation capabilities. These controversies have, you know, somewhat distracted from the fact that xAI is one of the, like, five companies out there building these frontier models. They are just about as far along as anyone else, including OpenAI and Google DeepMind. So, Paul, it's worth noting that we don't talk about Igor much. We've definitely mentioned him before, but he's a significant player in AI; he used to work at both DeepMind and OpenAI before co-founding xAI. Do you have any thoughts about what might be behind his departure? Is it coincidental that this all comes during more controversy for xAI?
50:58
I don't know. I mean, again, it's one of those where you can only take them at their word. He broke this news himself, and then it was covered by, you know, the publications. And everything he said regarding that Max Tegmark dinner you mentioned: Max showed him a photo of his young sons and asked him, quote, how can we build AI safely to ensure that our children can flourish? I was deeply moved by this question, and I want to continue my mission to bring about AI that's safe and beneficial to humanity. I do just think there's going to increasingly be a collection of top AI researchers who see, I don't know if it's the right analogy, the light at the end of the tunnel. They see the path to AGI and superintelligence, and they know it can go wrong. And I think you're going to have a bunch of these people who probably made more money than they'll ever need in their lifetimes already, and they want to figure out how to do this safely. And people are going to be at different points in their lives, they're going to have different priorities in their lives, and I think there's going to be a whole bunch of them who think they can positively impact society. So I don't think this is the last top AI researcher we're going to see take an exit to go focus on safety and bringing this to humanity in the most positive way possible. I mean, I'm optimistic we'll see more of those. I hope we see more people focused on that. But yeah, other than that, there's not much to read into it, I don't think, from our end.
52:40
I'd also love to just see more of these people, I guess, publishing or talking more about the very specific pathways they want to take to do that, because it's hard for me to wrap my head around how exactly you influence AI safety if you are not building the frontier models. Not to say you can't have plenty of amazing ideas that catch on, or legal and policy influence, but I would just be curious what their suggestions are.
54:09
Yeah, and I think Dario has said as much with Anthropic. When people push back, well, how can you talk so much about AI safety and alignment when you're building the frontier models like everybody else, pushing these models out into the world, and now maybe even saying you're willing to set your morals aside and take funding from people you think are evil to achieve your goals? His belief, and I would imagine the belief of quite a number of people within these labs, is that we can't do AI safety if we're not working on the frontier. Like, we need to see what the risks are to solve the risks. If we give up and don't keep building the most powerful models, then we lose sight of what those risks are and how close we are to surpassing them. So that's his position. I don't know if that's something that just helps you go to sleep at night or if it's truly what he believes, and I don't have any reason to believe it's not. It's sort of, at all costs, we have to do this, because otherwise we can't fulfill our mission of doing this safely. It's a fine line, because there's no real proof that they're going to be able to control it once they create it. So it's a catch-22. You've got to create it to know if you can protect us from it. But you may create it and then realize you can't. And there we are.
54:37
All right, next up then, something deeply unserious. AI startup Perplexity has offered Google $34.5 billion to buy Google Chrome. This arrives as US regulators weigh whether Google should be forced to divest Chrome as part of an antitrust case. Perplexity is treating this seriously; they say multiple investment funds will finance the deal, though analysts quickly dismissed the offer as wildly low. One analyst put Chrome's real value closer to $100 billion. Google, for its part, has not commented on this. It's appealing the judge's ruling that it has illegally monopolized search, so it's unclear if Chrome will get sold at all. Skeptics argue the deal is unlikely not only because of the lowball price, but because untangling Chrome from Google's broader ecosystem could be very, very messy if it were to get sold. So, Paul, this just, I don't know, feels like a bit of a PR play from Perplexity. Not the first time. I know you've got some thoughts.
55:55
Yeah, I mean, I don't want to hammer on Perplexity. Good technology. But I don't think they're a serious company. They just do these absurd PR plays. They did it with TikTok, they're doing it with Chrome. They claim they have the funding, whatever. This is just their M.O. by now. So I don't put much weight on these things. The funniest tweet, and I get that this is total geek-insider funny, like most people wouldn't laugh at this, came from Aidan Gomez, the co-founder and CEO of Cohere and one of the creators of the Transformer; he was on the Google Brain team in 2017 that invented the Transformer architecture that became the basis for GPT. So Aidan is a legitimate player we've talked about on the podcast before. He tweeted: Cohere intends to acquire Perplexity immediately after their acquisitions of TikTok and Google Chrome. We will continue to monitor the progress of those deals closely so we can submit our term sheet upon completion. I don't know why, it was just tweet of the week for me. It was hilarious, because the whole point is that this is not a serious company, and he was just having some fun with it. Yeah, like I said, I have a hard time putting any real weight behind any of these things Perplexity does. The tech's great, if you enjoy Perplexity as a platform. We use it some; I don't use it as much anymore, but it's still a worthwhile technology to talk about. This PR stuff they do, though, is just exhausting.
57:02
Amen. All right, next up, Nvidia and AMD have struck an extraordinary deal with the Trump administration. They're going to hand over 15% of revenue from certain chip sales in China directly to the US government. This arrangement, which is tied to export licenses for both companies' chips, has no real precedent in US trade history. No American company has ever been required to share revenue in exchange for license approval. The deal was finalized just days after Nvidia CEO Jensen Huang met with President Trump. Only months earlier, the administration had moved to ban a certain category of Nvidia's chips, the H20, altogether, citing fears that they could fuel China's military AI programs. Now the chips are flowing again, though at a cost. Some critics have called the move a shakedown, arguing it reduces export controls to a revenue stream while undermining US security. So, Paul, from a totally novice perspective, since I am not a national security expert, this does feel a bit like Nvidia just cut a pretty blunt quid pro quo deal with the US government to avoid its products being banned. Is that what's going on here?
58:32
Yes. Obviously there are lots of complexities to this kind of stuff. You never know if the deal you read about in the media is the actual deal, or what the other parameters of it are. So we just have to take at face value what we know to be true. The only thing I would throw in here is the basic premise of why the US government would do this and back away from the ban, beyond the financials: they want US chips to be what's used. They don't want the world to become dependent on chips that aren't made by US-based companies. And China wants to become less dependent on US chips. There were actually some reports last week that they were requiring DeepSeek to be trained on Chinese chips, and it didn't work; they were having problems with the Chinese chips. So they actually need the Nvidia chips to do what they want to do. The H20s are nowhere near the most powerful chips Nvidia has, so the US wants to basically create dependency on US companies' chips. Maybe there are some other Department of Defense-related reasons, which we won't get into at the moment, why you'd want these chips in China. It's just a complex space. I also can't comment from any sort of authoritative position on the politics of the deal or the quid pro quo of 15% of revenue; who knows. But the gist of it is: Nvidia is a US-based company, the US government wants countries around the world to be dependent on US technology, that's good for the US, and Nvidia maintains its leadership role. I think that's the basis of it. And with this administration, a lot of things come down to the financials and being able to make a deal, quote unquote. So it looks good for everybody, I guess, is kind of the gist of it.
59:53
Another AI-in-government story: Anthropic is now offering Claude for just $1 to all three branches of the US government. This includes not only executive agencies but also Congress and the judiciary. The deal covers Claude for Enterprise and Claude for Government, which is certified for handling sensitive but unclassified data. As part of this, agencies will get access to Anthropic's frontier models and technical support to help them use the tools. This comes right on the heels of OpenAI doing essentially the same thing; they offered their technology basically for free to the US government, which we talked about in a recent episode. It also comes right as the federal government is launching a new platform called USAi, which gives federal employees secure access to models from OpenAI, Anthropic, Google and Meta. Run by the General Services Administration, the system lets workers experiment with chatbots, coding assistants and search tools inside a government-controlled cloud, so agency data doesn't flow back into the companies' training sets. This is, like anything political or government-focused these days, a bit charged. The Trump administration has been pushing hard to automate government functions under its AI Action Plan, even as critics warn the same tools could displace federal workers, who are also being cut as part of the downsizing of the government. So, Paul, I for one am glad, I guess, that government employees are getting access to really good AI tools to use in their work. It seems like a win for effectiveness and efficiency, but there does seem to be some controversy here: are we going to use these tools to replace people rather than augment them?
1:01:43
So, give or take, there are about 137 million full-time jobs in the United States. That's based on a quick search, and this is AI Overviews, so I haven't had a chance to completely verify it, but it's citing Pew Research and USAFacts. Of that 137 million, about 23 million work for the government in some capacity, with about 3 million at the federal level. So yeah, it's a significant share of the workforce. The more this stuff is infused into work, the greater the impact it has. I don't know how much training these people are going to be given. I mean, we can talk all day about being given access to all these different platforms for a dollar or whatever. The same thing's happening at the higher education level, where they're doing these programs to give these tools to students and administrators and teachers. It all comes down to whether they're taught to use them in a responsible way, and I think that's going to end up deciding whether or not a program like this is effective. And then, to your point, what is the real purpose here? Yes, efficiency is great, but efficiency in place of people isn't great when there are no good answers yet from the leadership about what happens to all the people who won't have jobs because of the efficiency gains. So, interesting to pay attention to. Obviously there was some backroom deal here: you're up for federal contracts worth hundreds of millions of dollars, but you have to give your technology to the federal government basically for free. It's not hard to connect the dots that there are criteria to be eligible for federal contracts, and this is part of the game that needs to be played.
1:03:35
All right, next up, Apple is plotting its AI comeback, according to new reporting from Bloomberg. The comeback includes a bold pivot into robots, lifelike AI and smart home devices. At the heart of the plan Bloomberg is reporting on is a tabletop robot slated for 2027 that can swivel around toward people speaking and act almost like a person in the room. It's described almost like an iPad mini on a swiveling arm, designed to FaceTime, follow conversations and even interrupt with helpful suggestions. Its personality will come from a rebuilt version of Siri, powered by large language models and given a more visual, animated presence. Before that arrives, Apple is also going to release a smart display next year, alongside home security cameras meant to rival Amazon's Ring and Google's Nest. These mark another push into the smart home, with software that can recognize faces, automate tasks and adapt to whoever walks into a room. And of course, this comes after all the criticism we've talked about, with Apple kind of missing, and then fumbling a bit, the generative AI wave. So, Paul, it is interesting to see Apple making what appear to be some radical moves here. That tabletop robot feels especially noteworthy given OpenAI's plans to also create an AI device with former Apple legend Jony Ive. Is this going to be enough? Are they focused in the right direction here?
1:05:17
If they actually deliver on any of this. It's funny, though; going back to the Jony Ive thing and sorting out what that device could possibly be, I think a tabletop robot that sits next to you was one of the things I said I wouldn't be surprised by. So it wouldn't surprise me at all if that's a direction a number of people are moving in. There are different interfaces Apple could pursue. They haven't announced the date yet, but early September will be the next major Apple event, where they'll probably unveil the iPhone 17, the next iterations, maybe the new watch, things like that. So early September is the next date to watch, and I would imagine they'll give some kind of significant update on their AI ambitions at that event. So yeah, we'll keep an eye on the space. Again, it's shocking how little impact their lack of progress in AI has had on their stock price. The stock seems very resilient to their deficiencies in AI. So they've been given the grace of a third try at this, and hopefully they nail it.
1:06:56
Next up, the AI model company Cohere just closed a massive funding round: half a billion dollars at a $6.8 billion valuation. The money will fuel its push into agentic AI, meaning systems designed to handle complex workplace tasks while keeping data under local control. Cohere is a model company we've mentioned a bunch of times, but it flies a bit below the radar. It builds models and solutions that are specifically enterprise-grade and especially useful for companies in regulated industries that want more privacy, security and control than they get from the big AI labs, which, in Cohere's words, are kind of repurposing consumer chatbots for enterprise needs. To that end, Cohere has its own models that customers can use and build on, including a generative model series, Command A and Command A Vision; retrieval models, Embed 4 and Rerank 3.5; and an agentic AI platform called North. So, Paul, it has been a while since we've really focused on Cohere. This amount of funding certainly pales in comparison to what the frontier labs are raising, but I guess the question for me is: how much is Cohere worth paying attention to? How is what they're doing actually competing with and differentiating from the big labs?
1:08:09
Yeah, I mean, at that valuation and that amount of funding, they're obviously no longer trying to play in the frontier model training game. They're trying to build smaller, more efficient models and then post-train them specifically for industries. Early on, their playbook was to try to capture industry-specific data so they could train models for different verticals and things like that. I think for companies like Cohere, and again, Aidan Gomez, whom I mentioned earlier, is the CEO, there's probably a bigger market than there is for the frontier model companies. There are only going to be three to five in the end that can spend the billions, or maybe even trillions, to train the most powerful models in the future. But there are probably going to be a whole bunch of companies like this that are worth billions of dollars and just focus on very specific AI, training into specific industries and building vertical software solutions on top of it. So yeah, it's a good company. They just don't have the splashy headlines that the ones raising billions at these ridiculous valuations have. But I think if we do end up being in an AI bubble, companies like this, a little more specialized, probably still do pretty well within it. So definitely a company worth paying attention to. We've been following Aidan for years, and we definitely keep an eye on Cohere.
1:09:33
All right, we're going to end today with an inspiring case study of AI usage in education. We found a recent article that highlights how Ohio University's College of Business has been staying ahead of the curve on AI since the very beginning of the generative AI revolution. Within months of ChatGPT being released, the college became the first on campus to adopt a generative AI policy to guide responsible use. That grew into something bigger. Every first-year business student now trains in what the school calls the five AI buckets: using AI for research, creative ideation, problem solving, summarization and social good. From there, the training scales up. Students end up building prototypes of new businesses in hours using AI, partnering with companies on capstone projects and joining workshops where ideas become AI-powered business models in real time. By graduation, nearly every student has used AI in practical, career-ready ways. And the initiative has now expanded into graduate programs and even inspired a new AI major in the engineering school. Now, Paul, I'm going to put you in the spotlight a little here. Ohio University is your alma mater, and you get a big shout-out in this article for your work helping the school build momentum around AI. Can you walk us through what they're doing and why this approach is worth paying attention to?
1:10:57
Obviously, I didn't know they were doing this article. A friend of mine and some of our connections there shared it with me on Friday. We were actually out golfing for a fundraiser on Friday, Mike, you and I and some of the team, and they tagged me in it. So I say thank you for the acknowledgment within the article, but more so, for me, I was just proud to see the progress they'd made. I've stayed very involved with Ohio University through the years. I did a visiting professor gig back in probably 2016 or '17; I spent a week on campus teaching through the communication school. Around that time is when I got to know some of the business school leaders, and they were very, very welcoming of the idea that AI was probably going to affect them, even though they didn't really know what it meant yet at that time. Hugh Sherman was the dean of the business school then, and he eventually became the president of Ohio University before retiring again. I got to know Hugh very well. I spent a lot of time with them back in those days just talking about where AI was going and what impact it could have. And to their credit, they were very welcoming of these outside perspectives, and that's not always true, especially in higher education. I think it was maybe summer, right around this time in 2019, when Hugh Sherman brought me in to lead a workshop. It was a half-day workshop with about 130 people, the entire business school faculty and administration. We did a workshop on applied AI in the classroom: how can we enhance student experiences and curriculum through AI, what are the near-term steps we can take, what's the long-term vision. It was one of the coolest professional experiences I've had. And I don't want to make it a main topic, but I almost failed out of college. I went into college pre-med at OU and didn't take it seriously for the first 10 weeks, so I lost my scholarships. I screwed up, and then my parents gave me another chance. So it was just such a cool thing for me to come back to campus, what would have been almost 20 years after I graduated, and lead a workshop on the future of education in the business school of a school I almost didn't make it through. It was never lost on me, this really amazing opportunity to go back and affect, in a positive way, a school that made such an impact on me in my four years there, and on my wife, who also graduated from there. So yeah, it was just awesome. And we love to put the spotlight on universities that are doing good work and are truly committed to preparing students for the next generation. I love the work they're doing, including through their entrepreneurship center, enabling people to think in an entrepreneurial way and apply AI to that, plus AI as a layer over any business degree. I have a relative who's actually starting there this week, heading down for his sophomore year, and I've been talking a lot to him about this: whatever business degree you go get, just get AI knowledge on top of it. I don't care if it's economics or finance or computer science, whatever it is, just get the AI knowledge on top. And I have confidence that OU is going to provide that.
I think as a parent, as a family, you want to provide the opportunity for your students, your family members, to go somewhere where they're going to have access to that knowledge; they get to make the choice whether they go get it. But you want to make sure they at least have it, at a progressive university that's looking at ways to layer AI in. So, yeah, we wanted to make sure we acknowledged OU, not just for personal reasons for me, but as another example of a university that's doing good things. We'll put the link to the article in the show notes if you want to read a little more about what they're doing down there. So, yeah, it's cool. I love OU. I've got to get back; I haven't been down there in a few months.
1:12:25
That's awesome. All right, Paul, that's a wrap on another busy week in AI. Thanks again for breaking everything down for us.
1:16:13
All right, thanks, everyone. And again, if you don't get a chance to attend the AI Academy launch live, check the show notes; we'll put the link in there and you can rewatch it on demand. So thanks again, Mike, for all your work curating everything, and we will be back next week with another episode. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community. Until next time, stay curious and explore AI.
1:16:20