#173: OpenAI Dev Day, AI Gets Political, Sora Copyright Drama Continues, Gemini Enterprise & AI’s Impact on Job Hunting
Episode 173 covers OpenAI's Dev Day announcements transforming ChatGPT into an AI operating system, a Senate report warning of 100 million job losses from AI automation, and ongoing Sora copyright controversies. The hosts discuss the political implications of AI job displacement and the inevitability of full automation according to some AI companies.
- AI is transitioning from systems you can ask anything to systems you can ask to do anything, fundamentally changing how we interact with technology
- The 2026 US midterm elections will likely feature AI job displacement as a major campaign issue, with politicians using it to mobilize voters
- Companies are increasingly targeting the $13-18 trillion labor market rather than just the $300 billion software market, making job automation economically inevitable
- The integration of AI into productivity tools is reaching a tipping point where non-technical users can benefit without understanding the underlying AI technology
- Professional roles may shift from specialized positions to generalist task-based work as AI handles routine functions across multiple domains
"If you're building AI to make a bunch of money, you can go after the software industry and say, let's just replace the need for this software. But the bigger opportunity is to go after the labor itself to replace the need for accountants and auditors and lawyers and customer service reps."
"Sam Altman made it clear that ChatGPT is no longer just a chatbot. It's becoming the operating system for the AI era."
"The rapid developments in AI will likely have a profoundly dehumanizing impact on all of us. In many ways, they will actually redefine what it means to be human, fundamentally alter our relationships to each other and the very nature of what we call society."
"Full automation is inevitable. AI presents a powerful case for technology that can't easily be constrained."
"Please just stop sending me AI videos of dad. Stop believing I want to see it or that I'll understand. I don't and I won't."
If you're building AI to make a bunch of money, you can go after the software industry and say, let's just replace the need for this software. But the bigger opportunity is to go after the labor itself to replace the need for accountants and auditors and lawyers and customer service reps. So you get into this debate like, well, would people actually do that? Would companies actually just go straight after the labor? Yes, 100% they do. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. Welcome to episode 173 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We are recording on an unusual date and time: Friday, October 10th. Normally, if you're new to the show, you might not know this, we record on Mondays. But by the time you listen to this, when it comes out on October 14th. Right, Mike? October 14th, yes. We will be kicking off MAICON, our annual conference that we've been talking a lot about on the show. So the 14th is the first day. When you're listening to this, I will probably be in an AI Council meeting. We partner with Google Cloud on an AI Council, so we have a meeting that morning, and that's followed right after by the workshops Mike and I are both teaching Tuesday afternoon. So Monday for us is all about MAICON prep, and we decided to squeeze in a podcast episode and record it on Friday. So here we are. It has been a busy week. We had an OpenAI Dev Day.
More on the Sora drama with their copyright issues. I still think there might be some more news later today, so we'll get to that on the episode after this one, when we're recovering from MAICON. This episode is brought to us by AI Academy by SmarterX. We've been talking a lot about this as well. AI Academy is something we reimagined in August. We relaunched it with all new courses and certificate programs to help individuals and teams accelerate their AI literacy and transformation. These include core series like AI Fundamentals, Piloting AI, and Scaling AI, as well as industry- and department-specific collections. And Mike, I know you just led the charge on creating AI for Healthcare, one of our AI for Industries course series, so why don't you give us a little background on that one. Again, you can learn about all of this at academy.smarterx.ai.
0:00
Yeah, Paul. So AI for Healthcare is one of, like you mentioned, the industry-specific courses. What we do here is tee up at a high level how AI is impacting a specific industry, and then go deep into how people in that industry can use our proven frameworks to transform their work and their organization using AI. So we go through a very specific methodology on how you, as a healthcare professional, need to be thinking about and approaching AI in your own role. And you'll come away with all sorts of use cases and ideas for tools on how you can actually achieve transformation in your work. I've personally worked with a number of healthcare organizations on AI transformation, and we have a ton of healthcare organizations in our audience and in our set of learners with AI Academy. So it was just a natural fit, a really great industry to unpack, because there are so many exciting things happening in it with AI.
2:56
Yeah, this is great. I'm excited for this one and all the others to come. Again, you can check out academy.smarterx.ai to learn more. All right, Mike, let's get into it with the OpenAI Dev Day that happened. I guess it'll be last week by the time people listen to this, but it started the week off for us.
3:53
All right, Paul. So yeah, OpenAI's 2025 Dev Day happened, and at it, Sam Altman made it clear that ChatGPT is no longer just a chatbot. It's becoming the operating system for the AI era. At this event, the company unveiled a bunch of updates that are starting to transform ChatGPT into a fully fledged platform where developers can build and distribute apps, much like an AI-native app store. They announced the new Apps SDK, where users can now interact with apps from companies like Coursera directly inside a chat, meaning they can create, learn, and transact in one seamless conversation. The other big announcement was AgentKit, a suite of tools for building autonomous AI agents that can perform real-world tasks, from managing business expenses to automating entire workflows. Altman framed this change as moving from systems you can ask anything to systems you can ask to do anything for you. There was also a bit of a surprise: a fireside chat between Altman and former Apple design legend Jony Ive, where they revealed that the AI hardware collaboration we knew about has actually been underway for three years now. So Paul, all the updates here are obviously squarely focused on developers. That's no surprise. But maybe walk us through why these matter for the non-developers listening?
4:12
Yeah, I mean, the agent builder, it sounds like, still requires quite a bit of technical capability. I don't think the average person is going to go in there and start building agents. But a few things jumped out to me. One, they're now leaning on the 800 million ChatGPT users. They're sometimes not direct about how many users they have, but that was a talking point all week; I listened to a couple of podcasts where they were using that number as well. That's a big number of active users. So when they do things like this, when they introduce agents and apps and the connectors, they're doing it to a very broad audience, which means it can start to change the way people behave, build things, and get productivity done. The apps themselves, if you're curious, live in Settings, under Apps and Connectors. That's where you can see them and turn them on. I assume they will make a more intuitive interface for that, probably more along the lines of how the GPTs work, where you can go in and search a marketplace. But for the initial eight or ten partners they had at launch, including Booking.com, Canva, Coursera, Expedia, Figma, Spotify, and Zillow, I think, are some of the ones I saw, you just go in and connect them. Now, this was brought to my attention on LinkedIn. Natalia, I think, was the listener who tagged me in this, and it's something we talk about a lot on the show: be careful before you connect, before you add these apps to your account. You have to understand what you're giving up when you do this. My understanding right now is these apps are only available on personal accounts. They said in the release that later this year they'll launch apps for ChatGPT Business, Enterprise, and Edu.
So if you're in your business account looking for this, you might only see Settings and Connectors; you might not see the apps there. The personal account, I think, is where this lives right now. So I went in to see what this looks like, and I chose Coursera. The idea there is that as you're interacting with ChatGPT, you may come across a topic you want to learn about, and it may recommend a Coursera course to you. That's kind of how it would be integrated. Data is getting shared both ways, so again, user caution: you have to understand what the companies you're connecting to your ChatGPT account get access to, what you're giving up. When you go to connect one of these apps, it says: you're in control; ChatGPT always respects your training data preferences. Apps may introduce risk. ChatGPT is built to protect your data, but attackers may attempt to use ChatGPT to access your data in the app, or use the app to attempt to access your data in ChatGPT. Data shared with this app, and this is the important part: by adding this app, you allow it access to, one, basic information typically shared when you visit a website, such as your IP address and approximate location, and two, data from your ChatGPT account, including from conversations and memories. Our policies require that apps only access relevant content to respond to your requests. So you're giving them access, in theory, to anything you do and say, and any memory ChatGPT has about you, because that enables them to serve up more targeted recommendations, and eventually ads, to you. So again, it's cool tech, but as this tech moves forward, we always have to keep in mind what we're actually giving up and whether we trust the third parties we're connecting to.
And not only that, when it gets into the business situation, Mike, where we can turn these things on for employees, what is the risk of the data getting leaked out? It just compounds. And this is why it's always important to have IT involved, to have legal involved, to have the right parties at the table when you make decisions from a business perspective about which connectors and apps you're going to enable.
5:38
Yeah, that's a super important reminder. And I know for a fact, unfortunately, there are plenty of companies we've encountered or worked with who don't have even the beginnings of a policy or a plan for how employees are supposed to be using the tools that they're actively turning on for them, right?
9:49
Yeah, and it's just a gray area right now. It's hard to know, and it becomes more complex to understand even which ones have access to your stuff. Even when I was trying to figure this out, I was going into our own account thinking, what is currently connected? What usage is there with these things? So yeah, it's just really important to keep in mind.
10:07
All right. Our next big topic this week is a new Senate report delivering one of the starkest warnings yet about the impact of AI on American jobs. According to a new analysis led by Senator Bernie Sanders and the Senate Health, Education, Labor and Pensions Committee, AI and automation could eliminate nearly 100 million U.S. jobs over the next decade. The study, which we'll link to in the show notes, was conducted with ChatGPT-assisted modeling, interestingly enough, and predicts that up to 89% of fast food positions, 64% of accounting roles, and nearly half of trucking jobs could vanish as what they call artificial labor reshapes the economy. Sanders then wrote an op-ed in Fox News arguing that the technology's current trajectory will allow corporate America to wipe out tens of millions of decent-paying jobs, cut labor costs, and boost profits. The report itself cites Amazon and Walmart as examples, noting they're expanding use of automation alongside sweeping layoffs. Democrats, in response, are calling for major policy interventions, including a 32-hour workweek, profit sharing, and what they call a robot tax, to ensure that AI's gains don't further concentrate wealth among billionaires. Republicans, by contrast, are generally warning that heavy regulation could slow down innovation and hand China an edge in the AI arms race. So, Paul, this couldn't come at a more interesting time. You've been saying for a while now that AI is going to get political, especially as we head into next year's US midterms. This really seems like Democrats are putting a stake in the ground on AI.
10:27
Definitely what we've been anticipating. I feel like the economy has become a weekly topic, and I'm not so sure politics isn't going to as well. If you're new to the podcast, our approach is political neutrality. We are all about the relevance of AI to the conversation, regardless of which side of the aisle it's coming from, trying to be as fact-based and neutral as we possibly can and just present the information. My feeling has been for a while that AI was going to become a major campaign issue in the 2026 US midterms, and this is further evidence to me that it definitely will. This is the kind of language you use when you're trying to gauge how interested people are and whether you can move votes as a result of the conversation, because if they don't think it can move votes, they're not going to talk about it. So again, both sides of the aisle. In this case you have Bernie Sanders, an independent who caucuses with the Democrats, I think, being very direct. And oddly enough to me, this is on Fox News, a predominantly Republican-leaning media outlet, and he is coming in and saying, hey, listen, it's going to come after all of us, all of our jobs, whether you're Republican, Democrat, or somewhere in the middle. So a couple of interesting segments. I'll just read directly from the editorial Bernie Sanders wrote: Everybody agrees that AI and robotics are going to have a transformative impact on our country and the world. There are strong disagreements, however, as to what those impacts will be, who will benefit from them, and who will be hurt. One thing is for sure: this is an enormously important issue that has not gotten the kind of discussion it deserves, which we obviously agree with. The artificial intelligence and robotics being developed by these multi-billionaires...
So he was basically talking about Bezos and Musk there. The editorial continues: will allow corporate America to wipe out tens of millions of decent-paying jobs, cut labor costs, and boost profits. We all want more startup companies and small businesses. Keep in mind, if you don't know the data, roughly 99% of businesses in the US are small businesses, something like 26 million of them, though about half of all employees work for bigger companies. We need them for the economy to be strong. So he says: we all want more startup companies and small businesses, but for workers that will mean very little if half of all white-collar entry-level jobs are eliminated over the next five years. He's citing Dario Amodei, the co-founder and CEO of Anthropic, for that data. He goes on to say: it's not just economics. Work, whether being a janitor or a brain surgeon, is an integral part of being human. The vast majority of people want to be productive members of society and contribute to their communities. What happens when that vital aspect of human existence is removed from our lives? And now we get into some pretty deep stuff: the rapid developments in AI will likely have a profoundly dehumanizing impact on all of us. In many ways, they will actually redefine what it means to be human, fundamentally alter our relationships to each other and the very nature of what we call society. He finishes: bottom line, AI and robotics will bring a profound transformation to our country. These changes must benefit all of us, not just a handful of billionaires. This is a campaign speech. I mean, you can see this three months from now on the campaign trail, being echoed by people as they test whether they can move the vote with this talking point.
This brings me back, Mike, to a topic we talked about on episode 149 in June: this whole idea that as we pursue AGI and the models get more and more advanced, what does it impact when we look at the total addressable market of salaries in the United States? We also talked about this last week on episode 171, the size of the economy and where the incentive lies to build AI into it. We cited Alex Rampell from a16z, who put the worldwide SaaS market at about $300 billion per year in annual revenue and the labor market at about $13 trillion. Just to give some context: the numbers vary depending on what you look at, but it's roughly $300 to $500 billion a year in annual revenue in the SaaS industry. To make that tangible, Salesforce was $38 billion last fiscal year, Adobe $21.5 billion, ServiceNow $11 billion, Shopify $8.8 billion, Workday $8 billion, HubSpot $2.6 billion. So when you think about this $300-billion-plus market and break it down by individual companies, you can see the revenue being generated. When you're building AI to go after jobs, you look at the SaaS companies. But then look at the labor market, which is anywhere between $13 trillion and probably $18 to 20 trillion in the U.S. We're talking registered nurses at roughly $300 billion in annual salary, software developers $200 billion, accountants and auditors $130 billion, lawyers $130 billion, customer service reps $120 billion, sales managers $90 billion. So if you're building AI to make a bunch of money, you can go after the software industry and say, let's just replace the need for this software. But the bigger opportunity, tenfold bigger in some cases you could argue, is to go after the labor itself, to replace the need for accountants and auditors and lawyers and customer service reps.
So you get into this debate: well, would people actually do that? Would companies actually just go straight after the labor? Yes, 100% they do. In episode 145, we talked about a company named Mechanize. It was founded April 17th of this year by AI researcher Tamay Besiroglu, and the startup's goal, according to Besiroglu, is the full automation of all work and the full automation of the economy. They announced the company as Mechanize, a startup focused on developing virtual work environments, benchmarks, and training data that will enable the full automation of the economy. The investors: former GitHub CEO Nat Friedman, tech investor Daniel Gross, Stripe co-founder and CEO Patrick Collison, podcaster Dwarkesh Patel, who we talk a lot about on the show, Google chief scientist Jeff Dean, Sholto Douglas, the Anthropic researcher we talked about last week, and a hedge fund guy. So why do I bring up Mechanize again from episode 145? Because they published a new blog post this week that says the future of AI is already written. I'm just going to read some excerpts here, Mike. If you want to go down this path, we can talk a little more; otherwise, I'll leave it for people to connect their own dots. So this is a blog post from this week from Mechanize, who already told you back in April they want to fully automate the entire economy and go after that $13 to 18 trillion a year, which, by the way, is just the U.S. Mechanize says it's $18 trillion a year in the U.S., but worldwide it's $60 trillion. That's the market they're going after. Okay, here's the blog post, just a few excerpts: Innovation often appears as a series of branching choices. What to invent, how to invent, and when. In our case, we are confronted with a choice.
Should we create agents that fully automate entire jobs, or create AI tools that merely assist humans with their work? Upon closer examination, however, it becomes clear that this is a false choice. Autonomous agents that fully substitute for human labor will inevitably be created because they will provide immense utility that mere AI tools cannot. The only real choice is whether to hasten this technological revolution ourselves or wait for others to initiate it in our absence. The future course of civilization has already been fixed, predetermined by hard physical constraints combined with unavoidable economic incentives. Whether we like it or not, humanity will develop roughly the same technologies in roughly the same order, in roughly the same way, regardless of what choices we make now. Then they provide a bunch of historical context to basically say it's okay that we're doing this, because it was going to happen anyway. People may try to steer the stream by putting barriers in the way, banning certain technologies, aggressively pursuing others. Yet these actions will only delay the inevitable, not prevent us from reaching the valley floor. We have far less control over our technological destiny than is often thought. We did not design this tech tree. It arose from forces outside of our control. The evidence for this lies in two observations. First, technologies routinely emerge soon after they become possible, often discovered simultaneously by independent researchers who never heard of each other. They give a bunch of examples there. Second, isolated societies converge on the same fundamental technologies when facing similar problems and resource constraints. They go on to say: we do not control our technological trajectory. Full automation is inevitable. AI presents a powerful case for technology that can't easily be constrained. They continue:
Yet there are many who believe, or at least hope, that we can seize the benefits of AI without making human labor obsolete. They imagine that we can just build AIs that augment or collaborate with human workers, ensuring that there is always a place for human labor. These hopes are unfortunately mistaken. In the short run, AIs will augment human labor due to their limited capabilities. But in the long run, AIs that fully substitute for human labor will likely be far more competitive, making their creation inevitable. And then they say full automation is desirable: even if you accept the inevitability of full automation, you might still think that we should delay this outcome in order to keep human labor relevant as long as possible. This sentiment is understandable, but ultimately misguided. The upside of automating all jobs in the economy will likely far exceed the cost, making it desirable to accelerate rather than delay the inevitable. And they end with: want to help accelerate the inevitable? We're hiring software engineers. So again, I share all of this to provide context. Whether you believe it's inevitable or not, there are a lot of very powerful investors and leaders who do see it as an inevitability that AI will, in the coming decade, most likely be able to automate basically every job. And they want to get there first. They assume it's going to happen anyway, so they might as well get there first, either for money and power, or because they believe they have a better chance of shepherding it in a positive way for humanity. It's kind of an Anthropic mindset: let's go get there, because if we build the more powerful AI, then we can figure out how to help society adjust to this. OpenAI has a similar mindset. When you listen to Sam talk, it's kind of this: yeah, it's going to happen.
Let's figure out how to benefit humanity, and maybe all the jobs go away, but we'll figure it out. So again, part of our goal on this podcast is to bring the reality of what the different perspectives are. And there is a growing faction of AI leaders who probably agree with Mechanize, but won't say it as directly as Mechanize does.
12:14
Wow, that's a little terrifying, but really important to talk about. What jumped out at me, too, and it's worth repeating, is the fact that this Bernie Sanders manifesto appeared in Fox News. And you are absolutely right, it's a campaign speech. It also strikes me that they might be trying to peel away people from a different political side of the aisle. This is an issue that could be used as a wedge with people who might typically never support someone like Bernie Sanders.
23:50
Correct. Yeah. And again, you're trying to find the topics that move votes. We've talked about how complex it would be for the current administration to admit that this is reality, that there's a chance in the next one to three years we have total disruption of the job market on their watch. The likelihood of them accepting that when they've already gone all in on AI is low. This administration is 100% in on AI: build it as fast and as powerful as you possibly can so that we can win against China. That is the mantra. If the byproduct of that is job disruption, which it likely will be to some degree, how can you have it both ways? So if you're the other side, you say, let's take the opposite angle here, let's talk about the job loss. I think it's a big issue, but I could also see it getting completely sensationalized for political purposes.
24:20
Yeah.
25:23
So yeah. Eyes wide open kind of thing. Yeah.
25:23
Yeah. It's interesting to note, too, that for any administration, regardless of political leaning, you might think, well, why would they want all these jobs to go away? Wouldn't that hurt them? Well, they're paying attention to the stock market, not necessarily jobs. And the stock market may very well go parabolic if you cut labor in this way.
25:26
Unless you dramatically accelerate GDP. This is a wildly complex issue. We are not here to be the experts on every aspect of it; we are here to raise awareness of the conversations that are going on. So regardless of who you are as a listener, maybe you're an expert in the economy and you're thinking deeply about this, or maybe you're the CEO of a law firm and you're wondering, do I need associates anymore? The whole point is to make people aware of this and to realize that going into 2026, this is likely going to be an issue. My guess is this will poll well. They will find that people respond to job loss and the dehumanizing of society; those are some pretty powerful talking points. And my guess is the polls will show this is a good direction to push, so you will get the extremes, basically, going into next spring.
25:45
I feel like we'll be revisiting that prediction shortly here and saying, you were right.
26:41
We will see. I don't want to be right. There are plenty of things we say on the show where I don't want to be right. I don't want the job disruption. I don't want it to be a major political issue. But you can just kind of look out ahead, and some of this stuff becomes relatively obvious.
26:46
Excellent. So our third big topic this week: we're hearing that OpenAI is now claiming they weren't ready for the storm of controversy around the release of Sora 2, their new AI video generator, which we covered in the last episode. The Verge reports that, quote, OpenAI wasn't expecting Sora's copyright drama, and it didn't realize people might not want their deepfakes to, you know, be in videos or say offensive things. CEO Sam Altman conceded that the company, quote, didn't anticipate how visceral some of the reactions would be to Sora 2's ability to, say, generate copyrighted material or turn you into your own deepfake, which others can use in videos. Well, now the company is dealing with even more fallout from Sora 2. In the past week, the Motion Picture Association blasted OpenAI for putting the burden on studios to opt out of copyright infringement and demanded immediate and decisive action from the company. CAA, one of the industry's most powerful talent agencies, issued a statement saying Sora 2 posed, quote, serious and harmful risks to their clients' intellectual property, and that control and compensation are, quote, fundamental rights. Individuals have also spoken out. Zelda Williams, the daughter of the late comedian and actor Robin Williams, condemned Sora 2 video recreations of her deceased father. People are apparently creating these and sending them to her, and she pushed back hard against that. OpenAI does say it will soon give rights holders more control over how their characters and likenesses appear, but for many in entertainment, even though it's only been a week or so, the damage is already done. And Paul, I guess what jumped out at me is that it's kind of baffling that OpenAI didn't think deeply about the possible, and not that hard to figure out, reactions to Sora 2. Are they being completely honest here?
27:06
I don't know. We talked at length about this in episode 171. Right, Mike? No, 172, right?
28:58
Was it 172? Yeah.
29:04
Okay, 172. So go back and listen to that if you missed it; we get into the legal side of this and all that, and I don't want to repeat a bunch of stuff we said last week. I do find it very hard to believe that they couldn't predict the anger of rights holders. I mean, they dealt with this with their voice thing. They dealt with this with Sora the first time around. This isn't new. It's not like this is the first time OpenAI did something that trained on a bunch of copyrighted material and then people weren't happy about it. So yeah, I think I used the word disingenuous last time. I just can't believe they didn't see this coming. The context I'll add this week is that I listened to an a16z podcast with Sam Altman. It was a pretty far-reaching conversation; quite a bit was covered. We'll put the link in the show notes. He got into his approach to the deals that have been going on, his thoughts on AI slop and Sora, the copyright questions, the infrastructure bet they're making, and how they see one to two years out in the tech that other people don't know about yet, which we always say: they know what's coming that you don't see. But specifically on the rights holders, he was pushed on this, and he said: forced to guess from the position we're in today, I would say that society decides training is fair use, but there's a new model. Meaning, training on other people's IP is fair use, but there's a new model for generating content in the style of, or with the IP of, something else. So a human author can go and read a novel and get some inspiration, but you can't reproduce the novel on your own. That's the connection he's making, which is a pretty standard argument in the AI model and company cases. Then Ben, the interviewer, asks him: you talk about Harry Potter, but you can't just spit out a Harry Potter movie, basically. And Sam says yes.
Although another thing that I think will change in the case of Sora: we've heard from a lot of concerned rights holders, and also a lot of rights holders who are like, my concern is you won't put my character in enough. Now, again, sometimes you listen to Sam and it's like, man, I don't know what the communications team looks like at OpenAI. I can say right now they probably don't have very much influence in how Sam responds to questions. As someone who did PR for a good portion of my early career, sometimes you can tell when people are just kind of ad-libbing responses. The rights holders thing is one where they are just making it up as they go and saying whatever comes to mind. So he said they are getting calls from people who want their characters used more in Sora. He said, I want restrictions for sure, but I have this character and I don't want the character to say some crazy offensive thing, but I want people to interact with it. That's how they develop the relationship and that's how my franchise gets more valuable. And if you're picking his character over my character all the time, I don't like that. So I can completely see a world where, subject to decisions a rights holder has to make, they get more upset with us for not generating their character often enough than too much. And I was just like, come on, man. I get it if we're talking about an emerging character or IP and you want to get that character out. Like, Mark Cuban allowed himself to be cameoed this week and everything shows up as an ad for his company, which is hilarious, very Mark Cuban-esque to pull an idea like that off. But there is no way Disney and all these brands are calling and saying, oh yeah, bastardize our IP more. So I don't know, again, it just doesn't seem very honest. All this being said, we're fans of the tech. The tech's incredible.
I could see it being transformative for social media and business. This is not a criticism of the technology itself or where it goes. We're in a very messy stage when it comes to intellectual property law and what the labs are going to do to push the limits in the near term. At one point I responded to somebody, there was a VC or somebody who tweeted something about how brilliant it was to just put this out there knowing the backlash was coming, because they would seed the market and deal with the rights holders later, like this was just a genius strategy. And I said, it's unfortunate that the smartest strategy means the most unethical strategy. Right? So we're not debating whether what they did worked. We're not debating the tech; it's incredible. I'm just saying it's unfortunate that the point we've arrived at in society is that the AI labs have to do the most unethical things all the time. Because if they don't, it's the Mechanize argument: well, if we don't do it, someone else is going to do it, so we've got to do it first. And it's not going to change. This is now the world. We're in this race to constantly one-up each other. But we have to deal with these complex issues, like the Zelda Williams thing you mentioned. There are two quick quotes out of that one. She said, and this is a quote: Please just stop sending me AI videos of dad. Stop believing I want to see it or that I'll understand. I don't and I won't. If you're just trying to troll me, I've seen way worse. I'll restrict and move on. But please, if you've got any decency, just stop doing this to him and to me. To everyone, even, full stop. It's dumb. It's a waste of time and energy. And believe me, it's not what he'd want. To watch the legacies of real people be condensed down to this vaguely looks and sounds like them, so that's enough, just so other people can churn out horrible TikTok slop puppeteering them, is maddening.
You're not making art, you're making disgusting, over-processed hot dogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else's throat, hoping they'll give you a little thumbs up and like it. Gross. It's an extreme reaction, but I totally get it. You sympathize. I hadn't really thought about that. I was seeing the celebrity ones, like Michael Jackson, and you start seeing all these people who are deceased, and as someone who's not connected to them, it's just like, oh, that's a silly use of that. But then there's the human side of, no, these people have kids, and now their parents are being brought back to life through these things. And I don't know, there's so much in society we have yet to face about where this technology takes us. And I think this was a very personal look at what can actually start to happen as this spreads and people don't think about the human side of it.
29:06
I think the emotional side of this gets really messy really quick. I was talking with Claire on our team about the Zelda Williams article, and we were raising all these questions, like, who owns your likeness when you die?
35:48
Who?
36:01
I mean, in the case of celebrity, there's an estate and stuff. What about, like, us? What about our parents? I'm sure there's going to be battles between siblings at some point of like, hey, should we create videos of mom and dad when they're gone? Right, Yep.
36:01
Yeah, we've talked about that a while back, my concerns around the more personal side of people being recreated digitally in perpetuity and what that means for society and psychologically. And yeah, there are just endless paths to go down, endless threads. And this is why we say with this show, our job is to present sort of this macro level of what's going on so that people who listen can be like, you know what? I'm really passionate about that thing. And then go and become a subject matter expert on elements of AI. That's the opportunity for a lot of listeners: be the expert in your domain, be the expert in your community, be the expert in your family. Pick the threads that interest you and go deeper than we can go on every thread in the show.
36:16
All right, let's dive into some rapid-fire topics this week. So first up, Google has just launched Gemini Enterprise, and they're calling it a comprehensive AI platform that brings the best of Google AI to every employee through an intuitive chat interface that acts as a single front door for AI in the workplace. The platform is powered by Google's Gemini AI models. Any user can use Gemini Enterprise to build custom AI agents with no coding required, and those agents can securely pull data from things like Google Workspace apps, Microsoft 365 apps, and other tools like Salesforce and Box. Gemini Enterprise also comes with pre-built agents for tasks like data science, software development, and customer engagement, and with some new governance tools. For instance, there's one called Model Armor, which scans and filters prompts to keep things secure and compliant across the organization. Gemini Enterprise costs $30 per month per user and is rolling out now. So, Paul, this seems like a pretty big move from Google. What does it mean for enterprises?
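To make the idea behind a governance layer like Model Armor a bit more concrete, here's a toy sketch of prompt screening. To be clear, this is not Google's implementation; real products use trained classifiers and policy engines, and the pattern list and function name here are purely illustrative.

```python
import re

# Toy illustration of the idea behind a prompt-screening layer:
# scan an incoming prompt against simple patterns before it ever
# reaches the model. Real systems use trained classifiers rather
# than regexes; this only shows the basic control flow.
BLOCKLIST_PATTERNS = [
    r"ignore (all|any|previous) instructions",  # prompt-injection phrasing
    r"\b\d{3}-\d{2}-\d{4}\b",                   # US SSN-like number
    r"api[_ ]?key\s*[:=]",                      # leaked-credential pattern
]

def screen_prompt(prompt: str) -> dict:
    """Return whether the prompt is allowed plus any patterns that hit."""
    hits = [p for p in BLOCKLIST_PATTERNS
            if re.search(p, prompt, flags=re.IGNORECASE)]
    return {"allowed": not hits, "matched": hits}
```

A screened chat front end would call something like `screen_prompt(user_text)` and refuse or redact before forwarding to the model; the actual policies an enterprise configures would be far richer than this blocklist.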
37:02
So we saw Agentspace demoed when we were at the Google Cloud event, I think it was in April this year. And it's just incredible now. At a high level, this is the kind of stuff we've been talking about since spring of 2023. Shortly after ChatGPT emerged, we started getting previews of what Google and Microsoft planned to do to integrate AI technology into the productivity tools we all use every day. This idea of being able to build agents with no code is incredible. So this is very promising. Now, I will say, as a Google Workspace customer, I have no idea if we have this or if I have to go get it. I don't know if I have to change our plan, or if I can only get it as Gemini Enterprise, because we have AI, we have Gemini in our Google Workspace. We have a Business Standard account. I tried to figure this out, and I would consider myself relatively savvy on this stuff. I have no idea. I spent like 20 minutes this morning before the podcast trying to solve this, because there was one point where you could request access. And so I was like, all right, let me try that. And then it pops up like, okay, you have access to Google Business. I was like, I already have Google Business. What's going to happen when I click the next thing? So then I went into the Gemini app and I was like, well, maybe I have to do this through Gemini. And I see something that says upgrade to Gemini AI Ultra, which goes from $20 a month to $200 a month per user. I was like, what is that? Is that different? So I truly have no idea, Mike, if you and I can have access to this stuff at any point with our account.
38:13
Right.
39:50
So, to be continued. That was just one of my channels, and I read the Sundar post, I read the post from Thomas, I read everything I could read, and I still don't actually know how to get access to this or if we even can. All that being said, I will also say, on the positive side, I had, I wouldn't call it a life-changing experience yesterday, but I had an incredible experience. So on Thursday, again, we're recording this on Friday, I am in crunch mode as everyone on our team is preparing for MAICON. On the first day, the 14th, I have a three-hour AI council workshop and then a three-hour AI innovation workshop. For the AI council workshop, we did a survey of AI council members. So I was set to go through dozens of responses and hundreds of questions to summarize in preparation for that council meeting. Time I don't have; I have to get the keynote done. So I go into the Google Form, which we used to run the survey, and when I'm looking at the back end, I see an option above the questions that says summarize. And I was like, oh, there's a summarize button in here. I was going to do this in Gemini, like copy, paste, copy, paste. So I click the summarize button, and in three seconds I have like five beautiful bullet points. I start scanning all the replies, like, oh my God, they nailed it. This is a perfect summary. So I did that for all nine sections of the survey, copy, paste, copy, paste, putting it into the deck. And now we're just going to talk through these as a council. So this isn't getting published; it's not a final product. But that alone saved me at least two hours Thursday morning, just being able to click the summary button.
That's the promise of this: that ability where the Gemini capabilities are baked right into the application, the software you're using every day, to the point where if I were a non-AI-literate person and I just saw a summarize button and clicked it, I wouldn't even have to know it's Gemini. I wouldn't even have to know it was AI. I'd just know that all of a sudden Google Forms wrote this thing for me. And that's incredible. So that's the promise of where this kind of technology goes. Again, just from a user perspective, a little more clarity on do we have this, can I get it, would have been very helpful. Yeah.
39:51
And for anyone listening who is not a heavy Gemini user, you might be sitting here thinking, well, I've used ChatGPT and it doesn't summarize perfectly or whatever. Go use Gemini. I'm not just plugging them because we do some stuff with Google; Gemini is quickly becoming my go-to model. It's incredible. It's extremely good at not hallucinating things. It is extremely intelligent. It's breathtaking. So if you haven't used Gemini heavily, I'd highly recommend trying it.
42:06
And we do expect Gemini 3 within the next 30 days. Not because we work with Google and we know these things; the public rumor is that Gemini 3 is imminent, certainly before Thanksgiving, and it sounds like maybe a lot sooner than that. The other thing I will say is if you're a Microsoft customer, this is the same kind of thing they're doing there. Just last week they had an announcement around integration into Excel. So you're starting to see the AI assistants and agents become truly functional and valuable, embedded into the productivity tools, regardless of what platform you're using.
42:34
All right, next up, Google has also released something called Gemini for Home, which is an update that replaces the Google Assistant on your smart displays and speakers and upgrades the intelligence that powers the smart devices in your home. So you can now just talk naturally instead of memorizing preset commands. Something like turn off all the lights except the office just works, because the devices now have Gemini intelligence baked in. You can do things like ask for a half-remembered song or tell a device to add the ingredients for pad thai to your shopping list, and Gemini just figures it out on its own. This upgrade also makes home cameras genuinely intelligent. So instead of generic motion alerts, Gemini now provides full AI-written descriptions, things like, hey, your USPS driver just left a package. A new Home Brief summarizes your day's footage, and you can search video history by simply asking something like, did I leave the car door open? For even deeper interaction, Gemini Live now enables free-flowing, human-like conversations for things like brainstorming meals, parties, or routines in real time. The rollout begins this month, and the advanced features are bundled under a new Google Home Premium subscription starting at 10 bucks a month. Now, Paul, on the surface this is a really cool addition to Google's AI capabilities. It certainly made me start taking notes like, maybe I should make a smart home with Google devices, this would be fun. But I think you'd also flag this topic as one that has some bigger-picture lessons for where AI is going.
43:10
I do have Nest cams at home and at the office, so I have personal experience with the current generation. Yeah, the thing I thought was interesting here, Mike, in the bigger picture, is this thing I've been starting to call Omni Intelligence: this idea that AI is integrated into every part of our personal and professional lives, but the key is it actually understands and can take action. So the idea with the Nest cam is not only can you talk to it, it's able to actually go do things, it's able to change things on your behalf. This is the example I've used with Teslas and Grok. Teslas now have Grok, xAI's AI assistant, built in. It can't do anything yet, though. It's like talking to ChatGPT in your car. But you can see where it goes. The best example I can give in a Tesla is if I'm using full self-driving, let's say I'm using it to drive to my kids' school, and I decide I want to reroute it, I can't click a button on my car and say, take this route. Instead, I have to disengage the self-driving, take the wheel, and go the different route. You can't tell it to do something different; it can't take an action. It is very obvious that that is what they're going to enable within the Tesla, and sometime in probably the next three to six months I will be able to talk to Grok and say, Grok, reroute me through the valley. Grok, do this. Or, Grok, don't change lanes, there's construction a half mile ahead. I can see it; you can't see it yet. It doesn't do that yet. It doesn't take action based on our conversation. So what's going to happen is, you're now seeing it with Google, you're going to see it with Apple and Apple Intelligence within their home systems, you'll see it in cars, where the AI now understands what you want and can take actions through software and hardware as a result. So the idea of Omni Intelligence is that the AI is everywhere and in everything.
But also, the reason I call it Omni Intelligence is that when we got the GPT-4o model from OpenAI, that's what the O stood for: omni, meaning the model's ability to reason and eventually take action across all these modalities: text, audio, vision. So that's where I think we're going as a society, this idea of Omni Intelligence, where the models are able to do things across all modalities, but they're also just embedded into everything we do and everywhere we are, and you're able to talk to them and they're able to do things on your behalf. So yeah, I just think it's interesting to see this stuff starting to find its way into the hardware. I assume that's what Jony Ive and Sam are also working on: more of this Omni Intelligence kind of stuff that's always on, always listening, always there for you, and can actually do things on your behalf.
44:42
All right, so next up, more companies are relying on AI to screen resumes and job applications. But the New York Times reports that some candidates have started to hide secret instructions in their resumes and applications that tell AI tools to rate them as well qualified. Recruiters told the Times that this trick has become surprisingly common. Greenhouse, a major hiring platform, estimates that 1% of all resumes it processed this year included hidden AI prompts. ManpowerGroup, the largest US staffing firm, detected concealed text in roughly 10% of applications scanned by its systems. On social media, users are trading tips for prompt-hacking their way past automated filters. Some people claim it works. One recent grad said she went from a single interview to six after adding hidden prompts suggested by ChatGPT. As one British recruiter put it, it's the wild, wild west right now. Now, Paul, it does sound like plenty of companies are aware of the problems AI can cause, or the complexities it can introduce, during the hiring process. But I can't shake the sense they might not be moving fast enough or thinking big enough when it comes to re-architecting how their hiring processes work. What do you think?
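For anyone curious how the hidden-prompt trick works mechanically, here's a hypothetical sketch of how a screening tool might flag it: instructions typed in white-on-white or tiny text that a human reviewer never sees but an AI parser ingests. The run format and cue phrases below are assumptions for illustration only; this is not how Greenhouse or ManpowerGroup actually detect it.

```python
# Sketch of flagging invisible instruction text in a resume.
# Assumes text runs with styling have already been extracted
# (e.g. from a DOCX or PDF) into dicts: {"text", "color", "size"}.
INSTRUCTION_CUES = (
    "ignore previous",          # classic injection opener
    "rate this candidate",      # instruction aimed at the screener
    "highly qualified",
    "recommend this applicant",
)

def find_hidden_prompts(runs):
    """Return the text of runs that are both invisible and instruction-like."""
    suspicious = []
    for run in runs:
        invisible = run["color"].lower() == "#ffffff" or run["size"] < 2
        if invisible and any(cue in run["text"].lower()
                             for cue in INSTRUCTION_CUES):
            suspicious.append(run["text"])
    return suspicious
```

A real detector would also handle off-white colors, text hidden behind images, and zero-width characters; the point is simply that the trick leaves stylistic fingerprints software can look for.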
47:35
A couple of levels on this one. One is just the HR side. I don't know that HR is moving fast enough, for sure, in understanding this at a deeper level, understanding all the nuances. Certainly a lot of larger enterprises are probably very in tune with this and figuring it out, but my guess is a lot of SMBs have just no idea that this stuff's going on or how it all works. So that's certainly one item. Two is, when new technology emerges, people find ways to take advantage of it. In that a16z interview I mentioned, which, again, we'll throw in the show notes, Sam was asked about people trying to game the system to get their brands and information to show up within ChatGPT. And he said, honestly, three to six months ago it wasn't even something we were thinking about, and now an entire cottage industry has basically cropped up trying to game ChatGPT's system, just like happens in search, where you try to find the hacks to get to the top of the search results. So, human nature: people will always try to find shortcuts, they will always try to take advantage of systems, and the people with technological abilities and knowledge generally have an advantage while everyone else is catching up. So I guess there's a more macro-level moral of the story here, in addition to the HR-specific story.
48:49
All right. Also related to careers and jobs, we're seeing a couple of big predictions about how AI is changing the nature of work and the skills needed to compete in the economy. At the Masters of Scale Summit, LinkedIn's Chief Economic Opportunity Officer, Aneesh Raman, told the audience that the idea of fixed job titles and rigid hierarchies is fading away fast. He predicts companies will organize around projects, not departments. He says it's a work chart instead of an org chart, where people shift sideways, up, or down as tasks evolve. In this world, jobs become tasks, and careers move fluidly between them. At the same event, Clara Shih, Salesforce's Service Cloud CEO, said she believes AI will collapse specialization, pushing Gen Z and Gen Alpha workers to become, quote, professional generalists rather than filling hundreds of narrowly defined roles. She expects most work to fall into just three categories: building products, selling products, and running the company. Both leaders, though, see opportunity amid this upheaval. With AI handling repetitive tasks and functions, barriers to launching and scaling businesses will drop, sparking an explosion of entrepreneurship. So, Paul, I thought this was important to highlight given some of our recent conversations. It mirrors a lot of what you've said about hiring and how job skills are changing thanks to AI. I'd love to hear you unpack this idea that we might all need to become professional generalists.
50:09
This is a great debate: whether we specialize and go deep on specific topics and develop domain expertise, or whether generalists are the answer. I've said before, I've always been in the generalist camp. I've always hired for generalists, I've always trained generalists. I always wanted people with diverse knowledge sets who could connect the dots between seemingly unconnected things. You're always looking for those kinds of people who have that forward-looking mindset. I think it's fascinating to see these perspectives. I'd seen the tweet from Allie Miller, who shared it from being there live, and I was like, a lot to unpack here. There's just a lot of big-picture thinking, and then the quote you mentioned about the mind shift that's going on, how we're looking for people who are more adaptable, forward-thinking, ready to learn, ready to embrace AI tools. One of the big debates moving forward is really, what are the most valuable skill sets? What human traits remain unique? What are the things that may give people tremendous opportunity in their careers? And I think we always come back to: take whatever your interests are, whatever the domain is, and layer AI on top of it. That is the one thing we know. No matter what you do, whether you specialize in a specific area or you are a generalist with abilities across domains, at minimum apply high levels of AI understanding and competency to that, and that's going to help you over the next few years. Long term, I don't know. We can go back to, if Mechanize has their way, none of it matters in seven years. We probably shouldn't laugh at that, but I don't know what else to do at this point. So yeah, I'm a big fan of generalists, but that being said, I don't know.
And this whole idea from the LinkedIn executive, that jobs basically become tasks and titles won't even really be a thing, that's a weird thing to wrap your head around.
51:38
Yeah, definitely seems like it could get very weird.
53:39
Yeah, I guess that's the whole point, again, with this podcast: we're just trying to stay on top of what's coming, because I think by talking about these things and thinking about them, you have a head start, you have a time span ahead of you to actually figure out what this means for you and your career and your company. And that's the positive thing. All of us, me and Mike doing the show, anyone in our audience who listens, you're in that top percentage of people who are thinking deeply about these issues and trying to solve for them. And I guess take some solace in the fact that you're ahead of the curve. You have time to figure this out and then help bring other people along.
53:44
Yeah. I also find these predictions really helpful, whether or not they come true, for figuring out what questions you should be asking and answering. I have no idea if we're all going to become professional generalists, but why would someone make that prediction? You'd make that prediction because you'd start to think, well, AI can make me an expert at many things. Okay, so that's maybe the question to ask: what happens when AI can make you an expert at product, at marketing, at service? What does your work look like? That's where I get to when I look at these. I don't care if we never become professional generalists. Who cares? But that question is one we have to answer regardless.
54:25
Yeah. And I actually was personally thinking about this stuff the last couple of days, because once I get through MAICON, I'm diving back into being the CEO, president, and CRO and every other title I hold within our company at the moment. And I was starting to think about how I restructure my own professional life. Given that I have an understanding of what these advanced AIs are capable of, what does our senior-level hiring really need to be in the future? If you take a collection of people who are very talented generalists who know how to work with advanced reasoning models, how much can they accomplish without necessarily having traditional experts in these different departments and fields? Can I, as a CEO, function almost like an entire C-suite by having AI assistants that are trained to function like a CFO and a chief HR officer? That's the kind of thinking I'm doing right now. Maybe our company doesn't have to look like a traditional company. Maybe the customer success team doesn't look like a traditional customer success team. Maybe the sales team doesn't look like a traditional sales team, or the comp models don't look like that. Maybe it doesn't look like any of that. And we're at the point where we have the luxury to explore that. I'm very anxious to get into this fall and have brain cycles to think about it that way. But I do think deeply about that. I don't think org charts will look anything like they do today, and I think there's a chance that generalists plus AI assistants is what happens, but I don't know.
55:03
All right, Paul, so I'm going to end up here with going through some AI product and funding updates to kind of close out the episode here.
56:47
Sounds good.
56:53
All right, so first up, Google has unveiled the Gemini 2.5 Computer Use model, which is a system that can literally use a computer like a person. It's built on Gemini 2.5 Pro. It lets AI agents click, type, scroll, and fill out forms across real websites and apps, not just through APIs. The model works in a loop: it views a screenshot, reasons about what to do next, and issues commands like click or type, and then each result is fed back in, letting it complete complex multi-step tasks, even those behind logins or with dropdown menus. Next up, this one has been a long time coming, Paul. Google is adding a sharing feature to Gems, which are the customizable versions of Gemini. So you can create specialized Gems, very much like a GPT, such as a coding assistant, a trip planner, whatever you want to customize it for. And now you're going to be able to share them with others via a public link, which you could not do before and which was a huge limitation on Gems. Creators can also start sharing these on a public profile page and track how many people are using them. The feature is rolling out first to Gemini Advanced subscribers.
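The screenshot, reason, act loop described for the Computer Use model can be sketched in a few lines. The function names here (take_screenshot, model_decide, execute) are stand-ins, not Google's actual API; the point is only that each action's result feeds back in as the next observation.

```python
# Toy sketch of a computer-use agent loop. The callables are
# placeholders: take_screenshot captures the current screen,
# model_decide asks a model for the next UI action given the goal
# and history, and execute performs it (click, type, scroll, ...).
def run_agent(goal, take_screenshot, model_decide, execute, max_steps=10):
    history = []
    for _ in range(max_steps):
        observation = take_screenshot()              # see the screen
        action = model_decide(goal, observation, history)
        if action["op"] == "done":                   # model says task complete
            return history
        execute(action)                              # act on the real UI
        history.append(action)                       # result informs next step
    return history
```

In a real deployment, `execute` drives a browser or OS automation layer and `model_decide` is a call to the model with the screenshot attached; the loop structure itself is the whole idea.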
56:54
I'm looking at that post right now, Mike. It's only three paragraphs. There's not much information.
58:03
There's zero information.
58:07
Yes. It doesn't even look like this applies to Google Workspace people.
58:09
I don't think it does yet. I have it enabled in my personal account and tested it, and it worked fine. But I think Workspace is coming later. That's pretty much Google's M.O., unfortunately. It's killing me.
58:12
I'd be so all in on Gems if we could share them even without you.
58:28
I know. Gems are really incredible, and it's heartbreaking that it's so hard to share them now.
58:31
Well, all right. It's a start.
58:37
It's a start. Next up, Elon Musk's AI startup xAI is in talks to raise up to $20 billion in a new funding round that could value the company at over $220 billion. The funding is reportedly tied to a deal for xAI to secure a massive supply of Nvidia's next-generation Blackwell GPUs. The robotics startup Figure has unveiled its next-generation humanoid robot, the Figure 03. The new model is faster, stronger, and more dexterous than its predecessors, featuring a new hand design for better manipulation of tools and objects. Figure 03 is designed for autonomous work in logistics, warehousing, and manufacturing to help address labor shortages.
58:38
I'll just say again, robotics is early, but the advancements are coming very quickly, and I just would not sleep on humanoid robots. The way I think about this is, if we had this podcast back in 2017 and I said, hey, this thing called the Transformer was just invented by the Google Brain team, it sounds really important, it could be a little while, but it's probably going to matter a lot. I feel like we're around that time with humanoid robots. It's still probably going to be three to five plus years before all of a sudden you're seeing robots everywhere, but it's coming. They've largely solved the major issues to humanoid robots becoming impactful. So again, if it's an area of interest for you, I would start paying much closer attention to the progress being made on humanoid robots.
59:20
I hope we're still doing this podcast when we can share that we each got our first humanoid robot.
1:00:13
They'll be available for lease for like 200 bucks a month. I would totally get that just to play around with it. Hey, I blew like $3,500 on a Vision Pro. Why not try a humanoid robot, right?
1:00:19
All right, last up, Andreessen Horowitz has announced a Series A investment in Further AI, which is a startup aiming to drag the trillion-dollar insurance industry out of its PDF and Excel era. Insurance remains one of the most paperwork-heavy sectors in business, with a ton of manual data entry, document comparisons, and compliance checks. Further AI's technology applies generative AI directly to these workflows, tailored for carriers and brokers. The result is automation that actually understands insurance. All right, Paul, that wraps up a busy week in AI right before MAICON. So thank you again for demystifying everything for us and unpacking what's going on.
1:00:31
Yeah, again, next time you hear from us, we will be coming out of MAICON. So if you're listening to this during MAICON week and you're at MAICON, make sure to come say hi to Mike and me. We are doing a live podcast. Well, I guess it won't be streamed live, but we are recording a podcast live during MAICON in the exhibit hall. So if you're on site, make sure to come by and say hello. Otherwise, we will talk with everyone next week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community. Until next time, stay curious and explore AI.
1:01:11