#205: AI Labs Refocus on Agents and Enterprise, Trump’s New AI Framework, Meta’s Rogue Agent & What 81,000 People Want from AI
This episode covers a major strategic shift across AI labs toward enterprise customers and autonomous agents, triggered by Claude's coding capabilities breakthrough. The hosts analyze OpenAI's pivot from consumer focus to enterprise partnerships, political polling showing AI as a rapidly growing voter concern, and Meta's security breach caused by a rogue AI agent.
- Claude's coding breakthrough has forced all major AI labs to refocus on enterprise customers and autonomous agents, abandoning consumer 'side quests'
- AI is now the fastest-growing political issue among voters, with 79% concerned about government's lack of job protection plans
- The compression of project timelines through AI tools requires complete rethinking of business operating systems and quarterly planning
- Enterprise AI adoption faces serious security risks as agents can take unauthorized actions that bypass traditional access controls
- Token maximization - fully utilizing AI subscriptions - is becoming a competitive advantage for knowledge workers
"All the labs realized what Claude code unlocked and it wasn't like it was the first coding agent, it was just the best."
"I am not joking. This isn't funny. We have been trying to build distributed agent orchestrators at Google since last year. I gave cloud code a description of the problem. It generated what we built last year in an hour."
"All the industries you thought weren't going to be disrupted by AI are about to be disrupted."
"Working with these agents is like simultaneously talking to a PhD student and a 10 year old."
"I sit at my desk at 2am and I feel like reality is staring at me, screaming at me, literally screaming at me, trying to tell me something."
All the labs realized what Claude Code unlocked and it wasn't like it was the first coding agent, it was just the best. They did something different with the harness, like how they enabled it to do what it does. All these labs see not the finish line, but like the next mile marker, I'll say, of agentic capability and their ability to automate AI research and their ability to then, as Logan Kilpatrick's deleted tweet said, to start disrupting everything. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. Welcome to episode 205 of the Artificial Intelligence Show. I'm your host Paul Roetzer along with my co-host Mike Kaput. We are recording Monday, March 23rd, about 10am Eastern Time. Some big stuff last week, Mike. We got, yeah, I don't know, the whole last week, just crazy. We were on a company retreat for two of the days, so I always feel like I lost track of time for the whole week, and then my entire week was spent getting ready for the company retreat. You and I both taught workshops, which we'll talk a little bit about, to the team. And then I did five presentations and workshops, I think, on the first day. So it was a little bit of a crazy week. But in between all that, we had, I think, over 50 different sources in the podcast sandbox this week. So as usual, Mike did an amazing job of curating the topics for today, and we were updating what we were going to say even about three minutes ago before we came on. We still may adapt it as we're moving forward. 
It's just some big stuff: OpenAI and their kind of shift, but it's sort of a larger trend about what's going on with the labs. There's some new polling data about AI. I don't know, Meta's got a rogue agent. There's just a lot to unpack this week. So this week's episode is brought to us by AI Academy by SmarterX. If you're a regular listener, you hear us talk about AI Academy a lot. This is the core focus of what I do at the company, and it's a huge part of what Mike does at the company, building the content and the curriculum for AI Academy. It's designed to help individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. New educational content is added weekly so you're always up to date with the latest AI trends and technologies. Our AI for Industries collection features six core series and certificates that are designed to jumpstart AI understanding and adoption across industries. The six that are available right now, and they're part of the overall AI Mastery membership program, or you can buy them individually: we have AI for Professional Services, AI for Healthcare, AI for Software and Technology, AI for Insurance, AI for Financial Services, and the newest one that just came out last Friday, AI for Retail and CPG. So these series are an ideal launchpad for organizations that want to level up their teams and accelerate that AI adoption and impact. Mike teaches a number of them, including AI for Professional Services. And so later on in the episode, we're actually going to get some insights from Mike on some of the big takeaways he had from that series. This is part of a new element of the podcast: we're going to start trying to drill into some of the core series we're creating. We're spending so much time researching and building these things, we want to bring some of those core insights to everybody. 
As part of this podcast we're going to start doing some of those. So Mike will tee that off this week with AI for Professional Services. So individual and business account plans are available now, or you can buy those single courses and series, as I mentioned, for one-time fees. You can go to academy.smarterx.ai to learn more. All right, Mike, we have our AI Pulse this week. So this is the SmarterX AI Pulse. You can participate in these Pulse surveys each week. They're informal polls of our listeners where we ask a couple of questions related to that week's episode. So last week, Mike, we had Atlassian.
0:00
Atlassian? Yeah.
4:20
Atlassian laid off 1,600 workers and explicitly cited the AI era as the reason. What is your reaction? So we had 39% say this is the new normal, AI-driven restructuring is real and accelerating. We had 26% say it's AI washing, a fast-growing company using AI as cover for cost cutting. And 25% said too early to tell, we need to see if the roles are truly replaced. And then 11% said, I'm more concerned about the total tech layoffs in 2026 than any single company. I don't know, nothing really surprising there. It's a pretty balanced response overall, but I think 39% is the highest response, that this is kind of the new normal. And then: in a New York Times quiz, 54% of readers preferred AI-written prose over human originals. What's your reaction? I wonder if the exact same people answered this. 39% said, not surprised, AI has gotten genuinely good at clean, polished writing. 28% said writing quality was never the real moat; taste, judgment and point of view are. And then we had 20% say this is a wake-up call for professional writers to differentiate beyond surface quality. And 14% said the quiz was flawed, that it was, you know, kind of an irrelevant result. Okay, so we will give you the two Pulse questions for this week later on at the end of the episode. But again, smarterx.ai/pulse if you want to participate in those Pulse surveys each week. All right, Mike, so the first one, it started in our sandbox as a bunch of OpenAI news. There was a whole lot of stuff. I'll let you unpack what happened with OpenAI across, like, the 15 articles that we were looking at. And then I'm going to do my best to sort of take a zoom out and say, what is actually going on at all of these labs? Because I think there's a major shift happening. And when you start looking at the collection of all of this information at the same time, you start to kind of see the trend of where this is going.
4:21
All right, Paul. So right now, OpenAI is in the midst of executing what might be one of the more dramatic strategic pivots it's done so far. It's simultaneously restructuring how it sells, what it builds, and who builds it, all while preparing for a potential IPO later this year. So on the enterprise side, Reuters reports that OpenAI is pursuing partnerships with multiple private equity firms in deals potentially worth a combined $10 billion. These firms include places like TPG, Advent International, Bain Capital, Brookfield Asset Management, and others. The PE investors would contribute approximately $4 billion and receive equity stakes, board seats, and influence over how OpenAI's technology gets deployed across their portfolio companies. So the logic here is that private equity firms control massive portfolios of enterprise companies and influence their tech spending, so this partnership gives OpenAI a distribution channel directly into those businesses. Now, notably, Anthropic is also reportedly courting private equity, including Blackstone, signaling that this may become a standard go-to-market playbook for some of these frontier AI companies. Now, on the product side, OpenAI is consolidating its web browser Atlas, ChatGPT as a whole, and its Codex coding tool into a single unified desktop, what they call a super app. Fiji Simo, who leads OpenAI's applications division, confirmed this move, saying the company is cutting back on, quote, side quests to focus on coding and business users. At an all-hands meeting on March 16, Simo laid out the commercial goal: they want to convert OpenAI's 900 million users into, quote, high compute users by turning ChatGPT from a consumer chatbot into a productivity instrument built around agentic AI. Now, interestingly, they're facing quite a bit of competitive pressure on this front. 
So according to enterprise software vendor Ramp, the proportion of businesses using Anthropic increased from 1 in 25 to nearly 1 in 4 within a single year. And Anthropic currently wins approximately 70% of direct comparisons against OpenAI in new enterprise contracts. Now, meanwhile, at the same time, OpenAI is also going all in on fully automated AI research. So founding OpenAI member Andrej Karpathy went viral this past week, and we talked about this last week, describing kind of an experiment where he deployed an autonomous AI coding agent to run continuous research for two days. He calls it Auto Researcher. And basically this agent, like we discussed, executed hundreds of experiments, discovered new optimizations, and sped up how well the model itself worked in terms of its training time. Now, interestingly, Shopify's CEO tested the same approach on his internal company data, running an agent overnight that conducted dozens of experiments and improved performance by almost 20%. Now, the point here is that Karpathy says all frontier AI labs will adopt this approach, calling it, quote, the final boss battle these labs face. So there are reports that OpenAI is following suit, going all in on this idea of trying to build an AI researcher. Now, lastly, they are reportedly nearly doubling their headcount, according to the Financial Times and Bloomberg, over the next year as they scale across all of these initiatives simultaneously. So, Paul, maybe connect the dots for me here. OpenAI is making some pretty big, pretty sudden changes.
6:22
Yeah, so the trend I was referring to, this goes back to episode 189 of the podcast on January 6th. So right out of the holidays was when Claude Code sort of blew up, and it became very hot over those last two weeks of 2025. And we spent an entire segment of the episode talking about what was happening with Claude Code and how something had definitely changed. So that was the starting point, and all the major AI labs are in this accelerating race for autonomous agents and enterprise customers. So that's the thing I reference. When we first started the outline for this podcast yesterday, there was just this focus on OpenAI. But when you look at the totality of all the articles we're looking at, all the tweets we're seeing, you see that everything has changed to this refocus on agents and enterprises, which was not really OpenAI's core. It's not like they weren't going after that audience and they weren't building agents before. But Claude Code changed things. And you and I can attest to this. Like, it's incredible, within Claude, the ability to build things. I'll give an actual example a little later on in this episode. But it changed things, and they're ahead of everyone, very clearly ahead when you use the product. So I'm just going to break down a little bit the OpenAI thing, but I want to get into the bigger picture. So you mentioned Fiji Simo's talk about private equity firms, that they're in these advanced talks and that both OpenAI and Anthropic are aggressively courting these PE firms, which makes a ton of sense. And we've talked about this a little bit before on previous episodes, but Anthropic, as you mentioned, is winning in this space. So OpenAI's enterprise business, according to Reuters, is 10 billion out of the total annualized revenue, about 25 billion right now. And so that's a run rate. They're not actually at 25 billion yet in a year, but that's the run rate they're on right now. 
And then she tweeted on March 16: this news came out a little earlier than we planned. We're excited to be building a deployment arm and we'll share more details soon. So that's what we're talking about, this idea of kind of getting out with these frontier alliances where they're actually working with the consulting firms and stuff. So there's just a lot going on where they're trying to get to where the enterprise customers are. And then it started getting into this idea of refocusing, which is interesting, because I remember last fall we were talking about this, like, all of a sudden Sam Altman's everywhere: they're going to do space stuff, they're going to do robots again, they're going to, you know, build the video gen apps and social networks and devices with Jony Ive. Like, they're just everywhere. And it was like, whoa, you're getting crushed right now on the model side. Like, why don't you focus on the model side? And it appears they've come to realize that. So Fiji tweeted on March 19: Companies go through phases of exploration and phases of refocus. Both are critical. But when new bets start to work, like we're seeing now with Codex, which is their version of, you know, Claude Code, it's very important to double down on them and avoid distractions. Really glad we're seizing the moment. I remember when I first saw that tweet, I was like, that's weird. It's just a weird tone on a tweet. Almost like people were questioning whether she was behind this focus, because that's not what she was brought there to do. Like, she was in part brought in to diversify, based on her background. So I think some people may have taken this news as almost like a slight against what she was supposed to be doing there. I don't know, but that's how I read that tweet. It was like, wow, that's a really interesting attempt to set the tone that you're behind all this, rah-rah kind of stuff. 
So that was in relation to the Wall Street Journal article that said OpenAI plans launch of desktop super app to refocus, simplify user experience. In that, there was a quote that said, we realized we were spreading our efforts across too many apps and stacks and that we need to simplify our efforts. That was from Simo. That fragmentation has really been slowing us down and making it harder to hit the quality bar we want, it said. Top executives including Altman, Chief Research Officer Mark Chen, and Simo have spent the last few weeks reviewing OpenAI's product portfolio and looking at areas to deprioritize. And in an all-hands meeting, she told employees they couldn't afford to be distracted by those side quests you mentioned, and that they're in this major battle with Anthropic, and it's basically like a code red internally. This is all related to this idea of a fully automated researcher, which isn't news; we've talked about this being something they're working on for at least the last year. But I think the timeline is maybe starting to become more clear. So they said their new research goal is the North Star for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability, meaning, like, knowing what the models are doing and why they're doing it. And then there's even a timeline. OpenAI plans to build, quote, an autonomous AI research intern, a system that can take on a small number of specific research problems by itself, by September. A lot of what Andrej Karpathy is talking about is sort of a prelude to this stuff. And so the AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut at a later date. It's a weird timeline to me. I don't know why it would be that long. But anyway, this AI researcher, OpenAI says, will be able to tackle problems that are too large or complex for humans to cope with. 
You mentioned the idea of these side projects. What that means, I mean, here, side projects, it could be things like the Sora video generation app, like the standalone app. I gotta think the planned hardware devices fit into this bucket. I mean, they did spend the $6 billion on Jony Ive, but I gotta imagine that there's a chance you get delays in the hardware, because that's hard. It's a difficult thing to pursue, and that could definitely be a major distraction. And then e-commerce features in ChatGPT, you could see those kind of get sidelined. So there's lots of interesting things they've been doing that could get sidelined in all of this. And then you mentioned that at the same time they're doubling headcount, so they're aiming to grow to about 8,000 employees. They're at about 4,500 today, according to the Financial Times. And then, you know, overall, it creates this continuing muddied relationship with Microsoft as well. So again, when I started zooming out, it's like, well, what's going on with all the other labs? Like, we hear so much lately about the challenges Microsoft and OpenAI are having as they try to reimagine that relationship so that OpenAI could get in a position to go public. And in the process they allowed them to start developing partnerships with people like Oracle and AWS, which I'll talk about in a moment. So then we get into the Microsoft thing. Now, we'll talk a little bit more about this one in a rapid fire and we'll drill into this. But the premise is Microsoft made a major shift last week where they're moving Copilot under Satya Nadella. So they're actually moving it under another executive, Jacob Andrew, but then he reports directly to Satya. And they're taking Mustafa Suleyman, who was in charge of Microsoft AI, and he's going to just run the superintelligence lab, it sounds like. At the same time, Microsoft, according to the Financial Times, is weighing legal action over the $50 billion 
Amazon-OpenAI cloud deal. So now we have this weird muddying of relationships between Amazon and OpenAI. Then you have xAI, one of the other major labs; there are basically five major labs in the US. Elon Musk tweeted on March 12, following lots of turnover at the AI lab, where a lot of the co-founders have left in the last 60 days: xAI was not built right first time around, so is being rebuilt from foundations up. Same thing happened with Tesla. So you have xAI, you know, one of the major labs, in basically a complete reset mode, according to Elon Musk. And this is a month after, on February 12th, they got acquired by his other company, SpaceX. So just what, 40 days ago, SpaceX, also Elon Musk's company, said it had acquired xAI, the AI company controlled by Musk, to consolidate his empire and kind of build this, you know, one unified company. So that combined company now includes X, like the Twitter platform, and includes xAI. And then they have a deep relationship now with Tesla, his other company. At the same time, Musk is suing OpenAI, and that's supposed to go to trial in, like, April, right? Like, I think that's moving to an actual trial.
10:11
It is.
18:41
So you have this crazy thing, but Elon Musk is watching what's happened with agents and enterprise. He wants a piece of that, and he realizes, wow, we didn't build this the right way. Let's just hit the reset button. And nobody hits the reset button faster than Musk. Like, if something's not working, he's going to blow it up. Then you have Meta. We talked about this. So March 12: Meta delays rollout of new AI model after performance concerns. So they're spending what, over 15 billion last year just on talent acquisition. So they're investing heavily. They're rumored to be spending 135 billion this year on, like, capex to build out the future of everything. And it doesn't seem to be working yet. Like, they haven't released a major model since they acquired Alexandr Wang and Scale AI. So Meta's sort of in upheaval. They've kind of fallen off. It's like them and xAI are just sort of down at the bottom right now. You had Yann LeCun leave Meta. But then Meta shows up and buys Moltbook, the AI agent social network that went viral because of fake posts back earlier this year. So Meta is trying to get in and have a piece of this agent game. They'd probably love to play in the enterprise world, but that's not their natural thing. Then you have Jensen Huang talking last week about OpenClaw being the next ChatGPT. So there's a CNBC article that says Jensen Huang, the CEO of Nvidia, on Tuesday pointed to a fast-rising AI project called OpenClaw as a major step forward in how people interact with artificial intelligence. He said it is now the largest, most popular, the most successful open source project in the history of humanity. This is definitely the next ChatGPT. OpenClaw is an open-source autonomous agent platform that goes beyond traditional chatbots. Instead of answering questions, these agents can complete tasks, make decisions, and take actions with minimal input from users. Nvidia moved quickly to build around OpenClaw's momentum. 
The chip leader on Monday announced NeMo Claw, an enterprise-grade version of OpenClaw that layers Nvidia's software stack and tools on top of the platform. And then you have Google DeepMind. So Google, you know, came in hot with Gemini 3. It was great. Like, it's powerful. And just last week they announced some major improvements to Gemini within Google Workspace, which we experience, Mike, every day; we use Google Workspace and we embed Gemini. They've had kind of a runaway success with NotebookLM, even though, I mean, when you talk to the average business leader, they have no idea what NotebookLM is. So, like, in our bubble, NotebookLM is amazing, and we talk about it all the time; we have courses on it. The average person has no idea what it is or how to use it. So they've had success building these individual apps like NotebookLM and Gemini. They announced a major investment last week in AI Studio, where they're trying to get into the vibe coding game, so trying to play along with, like, how Claude Code and stuff is. But the reality is AI Studio is still for developers. Like, I don't know how to use it. I went in there last week and I was like, okay, maybe it's ready for me to use, and it's like, no, it's not. So Gemini, while amazing, and Google DeepMind, incredible, they have no answer to Claude Code right now. Like, it's running circles around them. And that's based on what a Google engineer said. So we talked about this on episode 189. Jaana Dogan, a principal engineer at Google, tweeted on January 2, and this is them saying it, not us: I am not joking. This isn't funny. We have been trying to build distributed agent orchestrators, which is exactly what we're talking about with, like, OpenClaw and Claude Code, at Google since last year. There are various options. Not everyone is aligned. (I still can't believe this tweet was allowed out.) I gave Claude Code a description of the problem. It generated what we built last year in an hour. 
It wasn't a very detailed prompt, and it contained no real details, given I cannot share anything proprietary. I was building a toy version on top of some of the existing ideas to evaluate Claude Code. It was a three-paragraph description. And then someone asked, when will Gemini get to this point? The answer: we are working hard on it right now, the models and the harness. And then I thought this was really interesting, Mike. So Logan Kilpatrick, who's sort of, like, you know, head of AI developer relations, basically, so he's like a major player within Google DeepMind, came from OpenAI. He tweeted something, and when I saw this tweet, I was like, holy, that's going to come down fast. And it did; he deleted it. It said, and I think this was on Saturday: all the industries you thought weren't going to be disrupted by AI are about to be disrupted. They're not allowed to say that.
18:42
Like, yep, Google customers are reading that saying, I'm sorry, what?
23:08
Yeah, 100% true. You can't say that. And so someone got that down real fast. So Google's sort of in this crazy phase where they're trying to build it into Gemini, they're trying to make it function within the productivity tools that they have, while DeepMind is telling you that, like, every industry is going to be changed. So then I'll wrap here with what I think is, if you want to, you gotta be ready for the technical stuff. When you listen to Andrej Karpathy, like, Mike and I talk about Andrej all the time. He ran Tesla computer vision for five years, co-founder of OpenAI, did a bounce back to OpenAI for about a year. Now he's an independent researcher. He's been on fire on X the last, like, three weeks, just, like, all these crazy things he's working on. But he did an interview on the No Priors podcast. Again, if you're ready for the technical side of this, listen to this episode. We'll put it in the show notes. A few key things; I was listening to this yesterday, actually. He was talking about how fast these models have evolved and how it's largely a skill issue, which is funny because that's a term my son tells me, like, when he beats me in a video game. He's like, it's a skill issue, Dad. Like, if I lose in Mario Kart, I'm like, oh, it's the wrong character. He goes, no, skill issue. So apparently that's, like, the lingo right now. So he was saying it's a skill issue if you can't get value out of these models. There's this idea of token maxing, which is a very technical concept, but it actually makes a ton of sense. So every time you use one of these models, you're basically burning through tokens. When a large language model does something, or an agent does something, it's basically making predictions using tokens. Tokens are like pieces of words, in essence. And so you get an allotment of these tokens. So let's say I use a million, two million tokens, whatever. 
So he was saying, like, if you're an engineer, you want to know what your token budget is. Like, how much AI can I use in my job? And so this idea of token maxing is, like, for the average user, like you and me, Mike: I have a Claude license, I have a ChatGPT license, and I have a Gemini license. And if I'm not maxing out my subscriptions every month, I'm leaving, like, intelligence and outcomes on the table. And so he was saying there's this, like, pressure right now, especially on coders, to max out your available tokens, because if you don't, you're just not getting the full value. And I think that concept is going to start carrying over eventually into knowledge work, where you're like, we have these AI tools, we're not fully utilizing them, and we're just leaving value on the table by not maxing out our tokens each month. And in a similar place, he talked about this idea of running projects in parallel, which I do. Like, I'll go into Claude and be like, okay, I'm going to give it this project. I'm going to go over to ChatGPT and I'll have it work on this project. And so there's times where I'm running three projects simultaneously with AI agents while I'm doing my other work, like email or something else. And so that's a big thing. And then he talked about the compression of timelines to complete projects, which I'm going to talk about in an upcoming topic here about our company retreat. But I think that's a very important concept, that things that used to take five hours, 10 hours, 20 hours now might take five minutes. And that's a weird environment to be working within. And then he also talked about this idea of compression of software stacks, where we used to have a CRM tool and a social tool and all these tools, and it's like, I'm just going to have a swarm of agents, and they're going to go talk to all this software, and I'm just going to have, like, a single user interface. 
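For readers who want to see the token-budget idea above in concrete terms, here's a minimal sketch. The roughly 4-characters-per-token rule of thumb and the one-million-token monthly allotment are illustrative assumptions only, not any provider's actual accounting; real subscriptions meter usage differently.

```python
def estimate_tokens(text: str) -> int:
    """Approximate token count using the common ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)


class TokenBudget:
    """Track estimated token usage against a hypothetical monthly allotment."""

    def __init__(self, monthly_allotment: int):
        self.monthly_allotment = monthly_allotment
        self.used = 0

    def record(self, text: str) -> None:
        # Add this prompt/response's estimated tokens to the running total.
        self.used += estimate_tokens(text)

    def utilization(self) -> float:
        # Fraction of the monthly allotment consumed so far.
        return self.used / self.monthly_allotment


# Illustrative usage: log a batch of prompts against a 1M-token plan.
budget = TokenBudget(monthly_allotment=1_000_000)
budget.record("Summarize this quarterly report for the leadership team." * 50)
print(f"{budget.utilization():.2%} of monthly tokens used")
```

In "token maxing" terms, a utilization well under 100% at month's end is the gap Karpathy is describing: paid-for capacity left on the table.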
And then the other final one I'll say with Karpathy, and again, this is all relevant to what these labs are doing. So if you listen to the Karpathy interview, all the labs are realizing what Karpathy is realizing on agentic capabilities, and they are now in a race to do what he explains in this. And that's why this podcast episode is so important, that No Priors episode. He is telling you point blank what all the labs are trying to do with agents, and you will walk away with a better understanding of the moment. But then he said at one point, working with these agents is like simultaneously talking to a PhD student and a 10 year old. So, like, sometimes you do something with it and it's like you were giving it to a top PhD student, and then the next moment it's some stupid simple thing and it just can't do it. So it's that idea of the jagged frontier and the jaggedness of these models. So zoom out, recap: all the labs realized what Claude Code unlocked. And it wasn't like it was the first coding agent, it was just the best. They did something different with the harness, like how they enabled it to do what it does. All these labs see not the finish line, but like the next mile marker, I'll say, of agentic capability and their ability to automate AI research and their ability to then, as Logan Kilpatrick's deleted tweet said, start disrupting everything. And so it is an all-out race for agents, and they're seeing a pot of gold with enterprise adoption, which is why Anthropic and OpenAI are doing deals with PE firms. It's why they're doing alliances with major consulting firms. They're trying to get in and get where this is going to be, because of the labor replacement value of being the model enterprises go to when they reduce workforces and put it all into AI models, token maxing to get work done. They see that future coming very fast. And it's important. 
I just covered a lot in, like, 20 minutes here. I think it's very, very important that you understand what we just covered. Like, that's what these labs are doing. And it's going to become very apparent, I think, in the next, like, three to six months, where they're headed.
23:12
Probably a pretty good time to be an enterprise buying AI technology. I'm assuming these labs would like to court you.
28:54
Yeah, yeah. You get a lot of credits. Especially, like, I'll give you the first million free.
29:00
All right, next up, we've got three separate developments this week that are painting an increasingly complicated picture of how Americans actually feel about AI and how Washington is responding. So first, we had some new polling. David Shor, who is head of data science at Blue Rose Research, appeared on the Odd Lots podcast with some interesting polling data. His organization has found that over the past year, AI rose in issue importance faster than any issue his firm tracks. It is now more important to voters than climate change, childcare, and abortion. According to their polling, 79% of voters are concerned the government doesn't have a plan to protect workers from AI job loss. 77% are concerned about entire industries being eliminated. 56% are worried about personally losing their job to AI. This is hitting at a time when 61% of Americans say life has gotten less affordable in the last year. Only 25% feel confident in their financial future. Only 34% say that, in their opinion, they have a secure job. So what Shor's data shows, and he's coming at this from the perspective of trying to find political messaging for the Democratic Party, is that this whole idea of, hey, everything's going to work out just fine, that message is dead on arrival. They actually found that when leaders in government and tech say AI will not cause widespread job losses, net trust is negative 41. And when they say AI will create economic productivity that benefits everyone, net trust is negative 20. Now you're starting to see this play out across the political spectrum, because second up this week, we got dueling AI political declarations. So first, there was a coalition involving a lot of unlikely bedfellows, including Steve Bannon, Susan Rice, Richard Branson, Ralph Nader, Yoshua Bengio, and others, who released the Pro-Human AI Declaration.
This basically called for a prohibition on superintelligence development until there's broad scientific consensus it can be done safely, as well as a number of other manifesto points about keeping AI pro-human. Over 40 organizations signed this, and they also found in their own polling that Americans prefer human control over the speed of AI development by an 8 to 1 ratio. However, another organization called Build American AI published a direct counter to this manifesto titled We Cannot Afford to Pause AI. They argued safety and innovation are not opposites and the US already has regulatory tools through existing authorities to manage AI development now. Third, the Trump administration unveiled a national AI legislative framework with seven pillars. This is a short document, but it basically gives guidance on how they think legislation related to AI should evolve. And this framework takes a pretty clear try-first rather than regulate-first posture. It opposes creating any new federal AI regulatory bodies. It defers copyright questions to the courts rather than legislating. And it recommends Congress preempt state AI regulations that impose undue burdens on developers, establishing what it calls Americans', quote, right to compute. There's an interesting part in here about shifting responsibility for protecting children online from tech companies to parents. So rather than imposing strict industry standards, they are actually shifting more toward empowering parents with tools to protect kids online. The framework also calls for Congress to empower Americans to challenge federal agency efforts to, quote, dictate the information provided by an AI platform. So basically trying to make sure that there is no undue influence on what information is provided by AI. So, Paul, I'm curious. There are a number of threads going on here.
If you're in the AI industry or just observing or trying to navigate these changes yourself, how are you thinking about these numbers and the moves on either end of the political spectrum?
29:06
Like any research, as we always talk about, you've got to know who's doing the research and what their goal is, what kind of bias might be in the research. That being said, it's going to become more political. And as we've said many times in recent months, this is all trial balloons. They're trying to figure out what Americans think about AI, and whether there's an opportunity to move votes a few percentage points one way or the other by taking a strong position on AI, which Republicans and Democrats haven't really done, for the most part, with voters. So the one thing that's becoming more interesting to me is, I always read this research and think, these people don't know what AI is. You're asking them questions about something that they don't understand. And now I'm actually thinking out loud here that that's maybe an advantage for politicians that want to manipulate and persuade people to vote one way or the other. So if you don't know what it is, then you can create, you can
33:03
Make it mean whatever you want. 100%.
34:02
So if people generally think, AI, I don't know, whatever, then it's like, okay, let's hammer the message that it's going to take jobs and data centers are going to ruin communities. And now that's all AI is to people. So this is maybe a dangerous slope we're going down here, where we're seeing the early efforts to gauge what perception is so that we can then influence perception of what it is to move votes one way or the other. So David Shor, I didn't know who he was, I didn't know his organization. So that's always the first thing I do. We see some cool data and it's getting shared everywhere on X. Who are these people? is always the first thing I ask. What is their mission? So David Shor is head of data science at Blue Rose Research, based in New York, originally from Miami. "I try to elect Democrats." That is his X profile. So what I just read is his X profile. There's no hiding what the point of this is. Blue Rose Research helps campaigns make higher quality strategic decisions by democratizing access to accurate measurement. That's on their About Us page. The name Blue Rose symbolizes turning blue what is now red. So again, there's no hiding what this is for. David Shor is a prominent American data scientist, political consultant, and expert in public opinion polling. Now, that doesn't mean it's not valid research. We're just saying that there's a perspective here. That's the whole point of understanding this. He actually worked for Barack Obama's 2012 reelection campaign. So the survey, just to put it in a little bit of context: when it says AI is the fastest growing issue, you have to understand it's actually 29th out of 39 issues right now. So yes, it's growing fast. But the top five issues for Americans are cost of living, the economy, political corruption, inflation, and healthcare. Those don't really move. Those are a pretty common top five.
Then if you go down to like 25 to 30, just to put in context where AI falls: you have war in the Middle East at 25, then international trade, income inequality, voting rights, then artificial intelligence, then race relations. So while it is growing fast, on the surface Americans don't really care that much yet. It is not something that would jump out at you as, votes are going to move based on that. But it is changing fast. You talked about some of these key ones, Mike. So the question was, how concerned are you about the government not having a plan to protect workers from job loss driven by AI? 79%. So you don't need to understand what AI is to say, yeah, it kind of worries me that there isn't a plan. And that is 100% true. They do not have a plan. Or if they have a plan, they're certainly not talking about the plan. So everyone should be concerned that the government doesn't have a plan. Then it said, how concerned are you about young people entering the workforce and finding fewer job opportunities because of AI? 79%. They should be concerned. That's happening. That is a real thing right now. So again, whoever's asking these questions, Republican, Democrat, independent, doesn't matter. That is a fact. It's harder to find jobs right now. Entire industries being eliminated faster than new ones are created? That's a ridiculous question. We're not getting rid of industries. Companies being disrupted? Sure. Career paths? Sure. But that's an absurd question. You could just throw that one away. AI changing the job market in a way that drives down wages for people like you: 72%. You could replace AI with any variable. Anything you ask, are you concerned with something driving wages down? Well, of course I'm concerned. I don't want my wages going down. So that one's, whatever.
You or someone in your family losing their job in the next year because of AI: 56%. That's a reasonable concern. And then, when leaders in government and the tech industry say AI will not cause widespread job losses, net trust, as you mentioned, is negative 41. Distrusted somewhat: 35%. Distrusted completely: 32%. So 67% distrusted somewhat or completely. Now, that may align with the 67% of people who don't believe anything the government tells you, right? So again, just framing where the data is coming from. Then there's another one, Data for Progress, which is a progressive think tank and polling firm that provides data, research, and messaging strategies for the progressive movement. They produce polling on policy issues and support campaigns. They came out with new research on February 27th, which is worth mentioning here. This is 1,200 US likely voters nationally using a web panel. So they're asking about how frequently they use AI in their daily lives, whether they have favorable or unfavorable views of the tech, and how confident they are in their ability to spot AI generated content. This is a pretty short survey. We'll put the link in. It's only like five pages. You can read it for yourself if you want. Some of these questions are pretty interesting. Do you have a favorable or unfavorable opinion of the following people or institutions? They asked about AI. Democrats: minus 3 net favorable. Republicans: plus 11. Independents: minus 5. They asked, when it comes to AI tools such as ChatGPT, so now they're trying to qualify for you what we're talking about when we talk about AI, so if you understand what ChatGPT is, at least you have some concept. When it comes to it in your personal life, have you mostly embraced or resisted using them to assist your life, or have you not found areas where you could use AI in your personal life? Embraced: Democrats 34, Republicans 32. Resisted: Democrats 35, Republicans 33.
And then "I have not found areas that I could use AI in my life": Democrats 30, Republicans 32. Which is totally balanced. There's really nothing there that would indicate anything they can do with that data to move people one way or the other. Then they had another one: sometimes people use AI to make fake or edited photos and videos that they post online. How confident do you feel in your ability to spot that stuff? Very confident: 15. Somewhat confident: 35. So that's 50% who think they can figure it out.
34:04
Oh, they do, yeah.
39:49
Right. That they can, yeah. They're wrong. And then they did an interesting one where they were comparing data from August 2025 to February 2026, where they asked, how frequently, if at all, do you use AI such as ChatGPT for your job? So right now, 14% say multiple times a day, 44% rarely or never, 11% a few times a month. So you have 55% of these people being polled in February 2026 at a few times a month, rarely, or never. So again, if you think everybody's doing this, they're not. And then the one you mentioned about the Pro-Human AI Declaration, again, it's important to know where the counter is coming from. So, the AI industry super PACs. We talked about this last year. CNBC had this as well as others: there's a super PAC called Leading the Future. And the contributors to this are Andreessen Horowitz, OpenAI co-founder Greg Brockman, Palantir co-founder Joe Lonsdale, SV Angel founder Ron Conway, and AI software company Perplexity. So these are the people pushing the super PAC, which is all about acceleration. It's all about rapidly accelerating what's going on. And they're basically saying that this stuff is ridiculous. So Build American AI is in essence led by this group, and they're saying we cannot afford to pause AI. So this TechCrunch piece highlights their response to the Pro-Human AI Declaration, the document you mentioned: The goals behind that effort are understandable. People want AI to be safe and they want clear rules. Those are fair concerns. But this is still the wrong direction. So this is the super PAC people: pausing frontier AI development will not solve the problems that supporters claim it will solve. If anything, it risks making several of them worse.
It would slow the research that helps us understand how these systems behave in practice and weaken America's position at the exact moment our adversaries are investing heavily in advanced technology. We cannot hand hostile actors on the world stage a strategic edge. That is what would occur if we paused AI. And then that leads to the AI legislative framework from the government, which is just a starting point. That's the most important thing to take away from it. It's just guidance on where they think legislation should go. It's not doing anything yet. But you covered some of it: protecting children; safeguarding and strengthening American communities; respecting intellectual property rights and supporting creators, which is a really funny choice of words, because it means they don't want you to have property rights as a creator; preventing censorship and protecting free speech; enabling innovation and ensuring American AI dominance, which is probably the most important one, because all the other ones fall under that one; and educating Americans and developing an AI-ready workforce, which I'm definitely intrigued to hear what they've got in mind there. So yeah, I think what we're seeing, and we've said this recently, is that every week there's now going to be more and more on the political side. We are moving into the midterms. We are moving into the moment where the political parties have to decide whether or not Americans care. And this election cycle is either going to be all about AI or it's just going to fade away. And you're seeing the push toward data centers being bad, job loss being bad. And then you've got the Leading the Future super PAC people, who are like, all of it's great and it's all going to create an abundant future for all of us. And if you don't believe that, then believe we have to beat China. That's basically the messaging.
You know, it's like, choose your fighter. I don't know where the middle ground is here, but right now neither side really knows. But the super PAC, the Leading the Future people, are going to push hard on this stuff, and they're going to try and make you believe it's all going to work out and jobs aren't going to be lost. What I would just encourage people to do is, don't get stuck in whatever your traditional political silos are. If you only listen to one perspective on this, this is an issue where you can't just be listening to one perspective that you've always followed. I think it's really important to realize neither political party knows the answer here. They're both trying to figure it out. And so it's really important that you open your own mind, look behind who's saying things, and ask what goal they have behind saying it, or where their research is coming from. It's going to be very important to try and keep a level head on this stuff and listen to arguments from both sides.
39:50
To your point about people often being polled who don't know what AI actually is: that's kind of the point of some of these numbers. We would throw out half these questions if we were doing actual research. But if they surface a strong opinion or view on AI, even if that view is wrong, that's really useful polling to certain people, because it tells you exactly what you need to say and hit on, using that ignorance almost as a weapon.
44:20
In some ways, yes. Facts and lies mean nothing in election cycles. It's all about what you can say that'll get you to remain in power. And again, I don't think that's a controversial perspective. It is what it is. They're going to tell you whatever you want to hear to stay in power or to get in power, both sides. So form your own opinion, form your own informed understanding of the situation. And then from there you can take more logical actions to make sure you understand. It's a situational awareness, I guess, about what's happening with this issue. It's going to become a major issue. I think they're going to find the levers to pull. They're going to find the wedges to create frustration and anxiety around AI, and that could get very dicey.
44:51
Our third big topic this week is about the SmarterX annual meeting and retreat we had over the last couple days of last week with our team. So this was super inspiring. We spent a couple days together collaborating. Day one, we talked about vision, goals, KPIs, priorities, and growth initiatives. Day two, we ran AI productivity and AI innovation workshops, which are designed to accelerate responsible AI adoption across business units and teams. And the reason we wanted to cover this and dive into it is because it has some signals, maybe some lessons, about overall company transformation with AI. So, Paul, I'll let you kind of unpack this for us, because what we were able to achieve over just two days, both in how we were approaching AI and by actually using the technology, I think can teach us quite a bit about how AI is changing the way businesses operate.
45:46
Yeah, I mean, a couple of things. And Mike, you and I haven't talked about this, so if you have other perspectives or things to add, let me know. But yeah, the reason we wanted to highlight this is a few things came to me. So it was two days, and there was a part of me that thought it was a great example of what you can do with the time you gain from AI. The fact that we use AI so intelligently within our own business gives us a little freedom to say, yeah, let's take a full two days, let's go do this thing, let's go think, let's go spend time together, build camaraderie, do all the things we should be doing. And as I was sitting there, I kept thinking, we've got to do more of this, when I think about what an AI-forward company looks like and how you take the benefits you gain from AI, the efficiency and productivity gains, and redistribute that in some way. I'm not a four day work week guy. I don't think that's reality. I do love the idea, though, of let's do more of this. Let's, once a month, just take an afternoon to think and talk and work on big ideas. I find that enables the work to be more fun and more fulfilling if it's not just, let's token max every minute of every day. So in some ways I want to maximize what we can do, but I also want to make sure we're getting the benefits of it. It's not a race to some end game, or some competitive race. So the way we set it up was, day one was, as you mentioned, sort of the company day: vision, goals, KPIs, building scorecards, a Rocks workshop for setting priorities for the coming quarter.
And then, I think, just the thing we teach, which is setting expectations for everyone of what an AI-forward professional looks like, and in some ways modeling that by showing in real time how we're using AI and making sure everyone on the team understands the capability. So Mike, on day two, you led off with this AI productivity workshop, and you talked about the idea of not only jobs as tasks, but tasks as workflows, which I loved as a framing. And then you went through an AI capabilities overview of all the things the models can do, so that people started to think a little differently about their own daily lives at work. We demoed JobsGPT, CampaignsGPT, and InnovationsGPT. I did that one in mine. But those are some of the free custom GPTs we've built that we make publicly available. We use them with our own teams. We literally use these tools to train our own teams, as an example of this AI-forward idea in real time. So Mike's doing his workshop, which was awesome, because I've never sat through one of Mike's workshops. Mike and I do these things all the time for other companies. We do them at our MAICON event. But we don't have time to sit in each other's workshops. So he's doing this workshop and he's showing AI kind of layering over workflows and reimagining workflows. And he showed this AI capability slide, and then he turned it into a spreadsheet with like
46:40
90 rows or something, like 90 different capabilities and features across some of the major AI tools, so you can quickly pick and choose and filter and map things to all the individual tasks you're doing as part of a workflow, for instance.
49:35
So like reasoning capabilities, video capabilities. So I was like, I love this. And I was looking at this thing thinking, I wonder if we could turn this into something. So as he's talking, I take the spreadsheet and I put it into Claude Code, or just Claude, and I'm using Sonnet 4.6 at this point. And I said, help me visualize this. We want it to help professionals understand the full capabilities of today's leading AI models so they can apply them to their work. That was the entire prompt. So it did it. And I was like, this is really cool. And I said, is there a way to turn this into an app that I can demo internally? So, like, three minutes later, I had this functioning app. So Mike doesn't know this is happening. He's just on stage doing his thing. But the best part, and I'm still trying to wrap my head around this: Mike, we don't have a Claude license for the team. So when Mike built his capability slide, it was what, Google?
49:48
So Google Gemini, ChatGPT, and NotebookLM, and I spun out deep research for both tools as kind of its own capability set.
50:38
Right? Okay. So this 90-row worksheet does not have Claude in it, but I'm talking to Claude to build this interactive demo. So Claude first asked me a question: how would you want to share or run it? And I said, a standalone HTML file is fine. It then said, what should people be able to do beyond browsing? Select all that apply. And for simplicity I was like, just browse. That's enough. Then, this is the question that blew my mind. It said, should Claude be included as a fourth tool? So it was aware that it wasn't part of the spreadsheet Mike created, and it asked me if it should add itself to the spreadsheet. I literally laughed out loud when I saw this. I was like, what? And so I said, yes, add Claude. Great. And it did. And it followed the exact model he had done for the others. And then it built this interactive capability thing. It honestly blew my mind. And as I said, we do this stuff every day. I see this stuff every day. And there are still moments where I'm like, I can't even believe it was capable of doing this in real time. And so when I say Claude is running circles around what some of these other apps are capable of doing, this is a perfect example of it.
50:45
Yeah. You one-shotted a 90-item capability database, more than 90, because it added in probably 25 different things from Claude. One-shotted it in a way that was genuinely professionally designed, extremely intuitive. It was great.
51:58
It was finished. Finished functions, finished capabilities.
52:14
Yeah.
52:17
Unbelievable. So the other one I'll share, and we'll touch on some of this later on, was Rocks. I put this on LinkedIn on Sunday, and I actually featured it in my newsletter, the Executive Insider newsletter. So I'm just going to read what I wrote, because it summarizes it really well. I was basically saying, we went through this retreat, and one of the things that became apparent to me is this idea of Rocks. We use a modified version of Rocks from the EOS system, in which departments and individuals establish three to five priorities per quarter. The Rocks allow us to align our time, energy, and resources on what matters most, and they provide transparency. So if I want to see the five things Mike's working on in Q2, I can go look, or if I want to see what the studio that Mike leads is doing, I can go see that. So the thing that became abundantly clear to me is that the time to complete Rocks is compressing, and that it requires a complete rethinking of business operating systems. So, for example, during a live session, as part of the company day, I was demoing a new AI assessment tool we're developing that I'll share more about in probably a month or two. I had used Anthropic's Claude Code with Sonnet 4.6 in real time to build an interactive reporting dashboard that visualized and analyzed responses from 17 people. I had built this assessment in Google Forms as an MVP, and then Mike and I tested it the day before the retreat just to make sure it worked. So I had my data and Mike's data, and then I had everybody else take it, and then I exported that CSV from Google Sheets. That was it. That was the entire process. You need zero coding, zero design abilities to do this thing. And I gave this to Claude while we were taking a lunch break. Here's my prompt: I had 17 team members take the assessment.
Can you come up with an elegant way to visualize the results based on the format model you already created? It had already created one for me and Mike, and the CSV was attached. So in a previous life, which I said was AKA three months ago, before Claude Code really started working, this would have been my entire Q2 Rock: create an interactive dashboard to visualize assessment results for teams. I would have spent 10 to 20 hours researching dashboards and developing a brief. Then I would have invested time and money hiring a designer and developer to conceptualize, build, and iterate on the design and capabilities. Then we would have gone through weeks of internal testing and revisions, and then maybe by the end of Q2, I would have actually had a minimum viable product that I could demonstrate to the team and pilot with users. Instead, in about five minutes, while I got a plate of pasta, Claude did the entire thing with one prompt, and the final product was beyond anything we could have possibly created. And I told Mike, I'm going to try this, I'm going to do it. And then he and I are both just waiting, like, we should go check the laptop. Did it do it? Did it do it? It was insane. It was totally interactive, better than anything I could have possibly designed myself or built working with a developer. And I'm now going to use that to actually turn it over to a developer and say, here, let's build this and take this live in, hopefully, 30 days. So yeah, we share this as a little bit of behind the scenes of how we think about SmarterX as an AI-native company, an event and media and education company. And two, to bring to life the fact that you don't need any coding ability to just build stuff now. It's totally compressing the timelines to do everything in business.
And every day it's changing the way I think about how to run our own company and how to advise other people to build their companies.
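For listeners curious what that kind of artifact looks like under the hood, here is a minimal, hypothetical Python sketch of the CSV-to-standalone-HTML-dashboard idea described above. The column names, 1-to-5 scoring scale, and file names are invented for illustration; the actual dashboard was generated by Claude and is surely far more polished.

```python
import csv
import html
import statistics
from pathlib import Path

def csv_to_dashboard(csv_path: str, out_path: str) -> None:
    """Summarize numeric assessment responses into one standalone HTML file."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Average each question column (everything except the respondent name).
    questions = [k for k in rows[0] if k != "name"]
    averages = {q: statistics.mean(float(r[q]) for r in rows) for q in questions}
    # Render each average as an inline CSS bar so the file needs no external
    # assets, scripts, or server -- you can just open it in a browser.
    bars = "\n".join(
        f'<div><strong>{html.escape(q)}</strong> ({avg:.1f}/5)'
        f'<div style="background:#4a90d9;height:12px;width:{avg / 5 * 100:.0f}%"></div></div>'
        for q, avg in averages.items()
    )
    Path(out_path).write_text(
        "<!doctype html><html><body>"
        f"<h1>Assessment Results ({len(rows)} respondents)</h1>{bars}"
        "</body></html>"
    )

# Demo with a tiny invented dataset standing in for the Google Forms export.
Path("responses.csv").write_text(
    "name,AI literacy,Tool adoption\nA,4,3\nB,2,5\nC,3,4\n"
)
csv_to_dashboard("responses.csv", "dashboard.html")
```

The self-contained single-file output is the same design choice the episode highlights: a standalone HTML artifact anyone on the team can open and share with no setup.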
52:18
I would argue we have quite well-done, clear, and ambitious Rocks, at least in our department, that we were working on during this workshop. But yeah, it is actually kind of laughable that all five of them should take three months.
55:42
Yeah, I mean, my guidance to the team was, I want you to have like five for your department. I actually think you need 20. And you need some categorical thing of, hey, this would take 10 to 20 hours of human labor, and we think we can do it in 10 minutes. There are honestly things that are just going to be like that. There are going to be all these quick-win Rocks where it used to be three months' worth of work, but it's probably three days now with mostly AI. It's like level three AI; it's going to do most of the work. So the companies that figure that out and realize that and restructure how they're building everything stand to do really well.
55:55
And just two final, really quick notes here, to piggyback on what you did with AI during these workshops. So the AI capabilities map I built, which was again like 90-ish rows of all these different capabilities and features, that's a lot to figure out on your own. And what's really cool is, I determined the framework I wanted to use and worked back and forth with Claude to say, okay, what's the most sensible way to organize these once we have them? I don't have them yet. And then it's like, okay, we've got a really solid system. How do I get them? Typically you might go do a bunch of research. You might have to sort through all sorts of documentation. I just went into each tool, screenshotted all my menu options, dropped them into Claude, and said, guess what? We're going to go create the spreadsheet based on the framework that you and I came up with. Go have at it. And then it basically one-shots a 90-row spreadsheet. It's incredible. And same type of thing during your innovation workshop. I fed Claude a lot of context about my department, the content studio, and some context about our organization, and then, using the framework you developed, layered that on top of that context. And Claude came up with better innovation ideas than I could have come up with, first, on my own at all, and second, in an entire day. I did it in like 20 minutes. So it's so powerful, not only using the right tools, of course, but having these proven frameworks and models and ways of thinking layered over them. All that stuff we've spent lots of time developing as IP, as unique models to approach these things with in our workshops. It's like rocket fuel at this stage.
56:33
Yeah. And I think there's just a lesson to be taken from how we structured it, because obviously our team is probably more informed than most teams about AI capabilities. But honestly, I don't know that they were even aware of a lot of the things these models could do.
58:18
Yeah.
58:33
And so it was very intentional how we did this. And I would advise other companies to think about a similar model, where you have this kind of state of AI session: what is it capable of? That's often what I'll go in and do with enterprises. I'll do a state of AI for business: here are the capabilities, here's what you need to understand. Then you do the productivity workshop, where it's like, how do we get efficiency and productivity in our tasks and workflows? Then we'll often do a problem-solving one, too. But the innovation one is how we closed, and I intentionally wanted to close with that, because once you understand what it's capable of, and once you've solved the lower-level efficiency and productivity things, you open your mind to the possibilities. And then we go around the room and each person gives us one or two innovations they're super excited about. So you leave after two days actually feeling ready to go, not drained. It's like, okay, that was amazing. I want to go do those things now. And that was what I got. People came up to me like, okay, can we do these things we just talked about? So I think it's a really cool format for people. And if you're trying to get your team on board, borrow that format. Make sure they understand it. And if you need help with it, give us a call. This is what Mike and I do all the time. We run boot camps and workshops. If nothing else, we can advise you on ways to do it. But if you're a big enterprise and you need help with it, we can come in and do stuff like that, too.
58:34
All right, Paul, before we dive into Rapid Fire, quick message here. This episode is also brought to you this week by our upcoming webinar, which is unveiling our AI for CMOs Blueprint, presented by Google Cloud. Now, this is actually happening the week you are listening to this episode: Thursday, March 26th at 12:00pm Eastern, 9:00am Pacific. In this session, me and our CMO Kathy McPhillips are going to break down the insights from this AI for CMOs Blueprint we put together in partnership with Google, where we break down real-world AI for CMOs use cases, tools, strategies and more. We'll also be doing some in-depth discussion and live Q&A. Registration is free, and all registrants will receive ungated access to the full AI for CMOs Blueprint. So go to SmarterX.ai/webinars to register. All right, let's dive into Rapid Fire, Paul. So first up, Microsoft CEO Satya Nadella is taking more direct control of the company's Copilot product, personally overseeing a restructuring that consolidates consumer and commercial Copilot into a single organization. Jacob Andrew, a former Snap SVP who joined Microsoft last year, now reports directly to Nadella as the new EVP leading the Copilot experience across both segments. The restructuring frees up Mustafa Suleyman, the DeepMind co-founder who became CEO of Microsoft AI in 2024, to focus entirely on what he calls the company's superintelligence efforts. This move apparently comes as Copilot trails quite badly in the AI assistant race: Copilot has 6 million daily active users compared to ChatGPT's 440 million, according to a CNBC article. Gemini has 82 million; Claude has 9 million. Nadella wrote to employees that Microsoft is doubling down on our superintelligence mission with the talent and compute to build models that have real product impact. So, Paul, what does this tell you about where Microsoft is headed with AI?
Like, reading this, I was like, I know it sounds like Mustafa is excited, but this feels more like he's getting sidelined and they need to get real serious about Copilot real quick. Which is kind of what we've heard anecdotally from users of Copilot.
59:53
Yeah, there's lots of variables going on here. I mean, one is the shift in their relationship with OpenAI. They were obviously a major investor in OpenAI; they're a major equity holder, I think somewhere around 27%. They own a big piece of OpenAI, but all of their efforts were being built on top of OpenAI's models. And now, if you go look at what we were just talking about with Claude, you're almost at a disadvantage as an organization if you can't use a breakthrough when it happens. When somebody builds just a better thing, you're at a disadvantage if you can't use that thing. So if Microsoft was stuck using OpenAI technology and all of a sudden Claude races ahead in some really important component, that's not great. And if you're Microsoft, one of the three biggest companies in the world, the fact that you aren't building your own models is probably a disadvantage moving forward. So I think there was this shift, probably a year and a half, two years ago, where they realized they were going to have to remove their reliance on OpenAI. It probably happened the day Sam got fired, when it became, oh boy, all our eggs are in one basket and it could go bad real fast. So there's been this ongoing shift where they knew they needed to invest in their own technology, build their own models, and have kind of an off-ramp over time from their reliance on OpenAI. And then in November of last year, they announced this humanist superintelligence movement. We talked about it on episode 179, which was on November 11. Mustafa had tweeted: it shouldn't be controversial to say AI should always remain in human control, that we humans should remain at the top of the food chain. That means we need to start getting serious about guardrails now, before superintelligence is too advanced for us to impose them.
And then there was a link to an article from November 6th called Towards Humanist Superintelligence, where he said: at Microsoft AI, we're working towards humanist superintelligence, incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally. So we've kind of known this was happening. At the time, I pulled what I said: maybe Mustafa stays at Microsoft to realize this vision, but I can't help but feel like this vision will eventually clash with the need to justify their investments in AI. And I think what they're basically saying now is: you go focus on this stuff, on the future and the building of this thing, but Copilot is critical to our business right now, it is not where we want it to be, and it needs to get much closer to Satya. That's basically, I think, what has happened here. I have no idea if Mustafa stays and keeps doing what he's doing, or if they really do believe in this humanist superintelligence thing, but I don't see Wall Street loving it. I don't think the stock price is going up because of that blog post or that vision. They want to know how you're going to compete with Claude and with Anthropic, and that's all Wall Street's going to care about. And at the end of the day, Satya and Microsoft have a fiduciary responsibility to return shareholder value, and I don't think that messaging plays. So I don't know, we'll see. It fits into that whole thing I started off with: these AI labs are shifting focus, and you're going to see a lot of reorgs, a lot of, they tried something and it didn't work. Meta burned 10 billion on the metaverse and changed their name to Meta, and it's done. There are going to be lots of big efforts, big misses, and you've got to move quick when it doesn't work. I think this is an example of that.
1:02:10
Not to mention anyone who's a Wall Street analyst of any type is almost certainly using Microsoft Excel, and thus Copilot, and sees it firsthand.
1:05:44
Or they used Claude in Excel and realized it was better than Microsoft.
1:05:54
That's what I'm saying, right? They have a very close experience with perhaps some of the inadequacies of this tool. All right, next up: an AI agent inside Meta took unauthorized action last week that triggered an actual security breach at the company. So an employee used an in-house agentic AI to analyze a colleague's question on an internal forum; they pointed the AI at the question and said, analyze this for me. The agent then posted a response to the question on its own, without being directed to do so. A second employee followed the agent's advice, sparking a domino effect that gave some engineers access to Meta systems they should not have been able to see. The security breach was active for two hours before it was contained. A Meta representative confirmed the incident and said no user data was mishandled, though the company's internal report noted unspecified additional issues that contributed to the breach. A source told The Information there was no evidence anyone exploited the unauthorized access or that data was made public, though the reporting notes that may have been the result of dumb luck more than anything else. The agent had also passed every identity check in Meta's systems, which exposes some pretty serious fundamental gaps in enterprise identity and access management. So Paul, I'm curious: how close are most companies to having this kind of thing happen to them?
1:05:58
I don't know, but it's certainly a very viable thing. This is why I said what I said in recent episodes; you've got to go listen to it. I mean, there's a reason why some enterprises are moving really slowly, especially when it comes to adoption of agents. We talked about the Jensen thing, where he was basically saying to open up Claude like ChatGPT, and I was like, okay, maybe, but you know how hard it's going to be in enterprises to do anything close to what that does. This is the exact issue. And we just talked on episode 203 about something similar happening with Amazon, where an agent just went rogue and started doing everything. I think I joked at the time that we could do a rogue-AI-agent segment every week. This is going to be a recurring theme, and it's going to become a major issue: the concerns around oversight and governance of these agents, and then these agent swarms that are just given access to stuff, and the breakdown you might then see in permission controls. We had this conversation at our own company meeting. It's like, can we connect this to that? Can we connect that to this? And the answer is no, because I don't know yet the risks associated with that. So yeah, it's one of these situations where the tech can do things, but it doesn't mean you should let the tech do things, because there are so many potential risks. This is a crazy one. You should go read the articles about it; it's pretty nuts.
1:07:23
Yeah. I just found this more notable because of how it happened. It wasn't some super incredible agent giving itself access to your whole code base or whatever. It was a totally unintended consequence of something that's probably a pretty normal use case on the surface: hey, let me use AI to analyze a question one of my colleagues posted on a forum. And then you're like, oh no, I realize now that this thing can choose what to do and how to do it. And that's a totally weird way to have to start thinking, right?
1:08:41
Yep. Yeah. And again, go listen to Andrej Karpathy's No Priors podcast episode and you'll understand this stuff at a deeper level. He talks a lot about these risks, and even he doesn't fully know. He talked about setting it up to run his house. He was like, oh yeah, I told it to go find the Sonos, and it goes into his network and finds the Sonos speakers, and he just gave it access to everything, security and all. He calls it Dobby the Elf. It's hilarious. So again, this is a recurring theme. It's really important to understand where agents are going, where these agent swarms are going, how they'll eventually be used to run organizations. Some people are willing to be out on the edges right now, setting these things up and connecting them to their own company data, and we're all going to learn plenty of lessons from their early efforts.
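For listeners wondering what a basic agent guardrail even looks like, here's a minimal, hypothetical sketch. None of this is Meta's (or any vendor's) real system; the tool names and policy are invented purely to illustrate the idea of an explicit allowlist per agent, plus human sign-off before any write action, so an agent can't post or grant access on its own.

```python
# Hypothetical sketch of gating an agent's tool calls.
# Tool names and the policy split below are illustrative assumptions,
# not any real vendor's API.

READ_ONLY = {"search_docs", "summarize_thread"}
WRITE_ACTIONS = {"post_reply", "grant_access", "send_email"}

class AgentPermissionError(Exception):
    """Raised when an agent attempts a tool call outside its policy."""

def gate_tool_call(agent_id, tool, allowlist, human_approved=False):
    """Return True if the call may proceed, else raise.

    Two checks: the tool must be on this agent's allowlist, and any
    write action additionally needs explicit human approval.
    """
    if tool not in allowlist:
        raise AgentPermissionError(f"{agent_id} is not allowed to call {tool}")
    if tool in WRITE_ACTIONS and not human_approved:
        raise AgentPermissionError(f"{tool} requires explicit human approval")
    return True

# Usage: a forum-analysis agent can read and summarize, but posting a
# reply on its own (the Meta scenario) is blocked until a human approves.
allow = READ_ONLY | {"post_reply"}
gate_tool_call("forum-helper", "summarize_thread", allow)  # proceeds
try:
    gate_tool_call("forum-helper", "post_reply", allow)    # blocked
except AgentPermissionError as e:
    print(e)
```

The point of the sketch is simply that autonomy has to be denied by default: the agent in the story passed identity checks, but identity alone doesn't answer whether this actor should be allowed to take this action without a human in the loop.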
1:09:14
All right, in our next Rapid Fire topic this week, the Anthropic versus Pentagon saga continues. The Department of War has fired back at Anthropic's lawsuits. In a 40-page filing in California federal court, the Pentagon calls Anthropic a quote, unacceptable risk to national security, arguing the company might attempt to disable its technology or preemptively alter the behavior of its model during warfighting operations if its corporate red lines are being crossed. Now, nearly 150 retired federal and state judges, appointed by both Republicans and Democrats, have also filed their own amicus brief supporting Anthropic. We talked last week about how tech companies like Microsoft and Apple have all filed their own briefs, basically arguing that this designation of Anthropic as a supply chain risk could mean the entire government procurement system becomes contingent on political favor rather than the rule of law. So Paul, the big piece here is really this idea that a bunch of ex-judges are coming out and saying they also support Anthropic. We have talked about whether this is going to get resolved anytime soon; there's a hearing on whether or not to grant Anthropic some temporary relief that's actually set for March 24th, the date this comes out. Where do we stand with this?
1:10:01
I don't know. The only context I'll add, Mike, is that I think it's still this he-said, she-said thing, where the government's saying one thing and doing another behind the scenes, while trying to give this perception that they're in the right and Anthropic is this horrible company and this huge risk. So there was a tweet thread from Roger Parloff, who's a senior editor at Lawfare; I'll put the link in. He said, some Anthropic updates: on March 4, just hours before Hegseth declared Anthropic a supply chain risk, allegedly due to threats of sabotage and data exfiltration, his undersecretary wrote Anthropic, and he has the screenshot of the email, that they were very close to a deal, asking to change a prepositional phrase. So while Hegseth's getting ready to go blast them on X and say they're done, they're actually still negotiating behind the scenes, and they have screenshots of it. Since then, the government has claimed that Anthropic sought a veto over Department of Defense actions, but two top Anthropic officials assert it never did, and they actually submitted a legal briefing saying this is not what happened. Similarly, the government's purported fear that Anthropic might disrupt the military was never raised with the company and is a technical impossibility; they actually explained, we can't even do the thing they're claiming we would do. And as for Anthropic's refusal to allow its product to be used for autonomous lethal warfare and mass surveillance, Hegseth himself said those concerns were understandable, and the commander of US CENTCOM echoed those sentiments, Anthropic's head of policy writes. So they submitted these briefs saying, they agreed with us; we weren't even raising something that they didn't themselves think was an issue. And then he had one last update on the government's response Tuesday.
It backed away from the secondary boycott Hegseth called for in his February 27, quote, Final decision post on X, admitting it was lawless but also taking no responsibility for its devastating impact. The hearing is coming up on March 24th. So yeah, these are declarations, legal declarations, from Anthropic's head of policy, Sarah Heck, submitted as part of their response to the case, and also from their head of public sector. They're basically saying, here, I'll testify that this never happened, or this is what they said. So the whole thing, as I've said many, many times, has become political. It's become a battle of egos on the government side. I think everyone sort of sees through why they're actually doing this, and we'll see what the courts have to say.
1:11:24
I guess it's impossible to tell, but based on that new context, it almost sounds like one possibility is that Hegseth jumped the gun on tweeting about this when they were nearing the deal. Right?
1:13:56
Well, jumped the gun, but also claimed some power that they actually don't have.
1:14:10
Right, right. Like, he's posting so aggressively when the deal's almost done, before this all blows up, and now it's just doubling down on a mistake, maybe. I don't know.
1:14:15
Right. Or you're just going to do harm either way, so you don't really care if it's legal or not. Those are the repercussions: nothing's going to happen to me if I do this and say this, other than it hurts that company, and I can try to use it as leverage to get them to do what I want them to do. Which is not an unusual political tactic.
1:14:24
All right, next up: Google DeepMind has published a cognitive framework this week that attempts to answer the question, if AI actually achieved AGI, how would anyone know? So the team here proposes a cognitive taxonomy with what they claim are 10 measurable traits of general intelligence on which to measure AI and its progress towards AGI, divided into two categories. The first category covers eight building blocks of human cognition: perception, generation, attention, learning, memory, metacognition, and executive functions. These combine to form two composite faculties that DeepMind considers equally important: problem solving and social cognition, the latter of which they basically define as the ability to process and interpret social information and respond appropriately in social situations. Their proposed test is pretty straightforward: they want to run AI models and humans through the same cognitive benchmarks, and they theorize you'd get a measurable estimate of when a single AI can meet or exceed human capabilities across all 10 of these areas. DeepMind actually launched a Kaggle hackathon with a $200,000 prize pool to crowdsource evaluations for the five areas where the gap in current testing capability is largest: learning, metacognition, attention, executive functions, and social cognition. They say their goal is to move the conversation around AGI from subjective claims and speculation towards a grounded, measurable, scientific endeavor. So, Paul, does this change anything about how we talk about AGI? Are we getting any closer to really defining what it is and actually measuring it?
1:14:41
No. I mean, Google DeepMind's done the best job of trying to get to that point. They had a paper last year, one Shane Legg led, where he was trying to define the different levels of general capability and performance and put some way to measure it. So I like the effort to try and quantify it, make it more meaningful, and maybe eventually get some universal agreement on what it is. The first thing I thought when I saw this was, well, how do you keep these tests from saturating? When the models eventually learn what the tests are, I don't know how you'd keep them sandboxed so the tests don't end up in the training data; the model could eventually learn to look like it has AGI just because it learned the test ahead of time. But I think the most important thing for our audience is that we keep coming back to this: AGI is a really interesting topic, and it's fascinating to follow progress towards it, but it's a meaningless term relative to what AI does to impact your job, your company, and the economy more broadly. We don't need to reach AGI, whatever that definition is, and we don't need to agree on a definition, for AI to transform businesses, the economy and society. There's this idea of capabilities overhang; that Andrej Karpathy episode I mentioned touched on this quite a bit. Just go back to that example I shared of rocks. If you have a company like ours that knows this stuff, we understand what AI capabilities are, we look at the operating system of our company, and we say, we're just going to reimagine the whole thing. Rather than five rocks a quarter, we think we can do 15 or 20, easily, and here's how. So we understand the capabilities and we're applying them to the best of our ability. Then take some other company that doesn't even have GenAI tools for their team yet.
They haven't even gotten them Copilot licenses or ChatGPT licenses. They've done no personalized training, never run a workshop internally; they're not taking advantage of any of the capabilities other than maybe using it as an answer engine or a chatbot. So there's this overhang where we have all these capabilities and so few companies are actually doing anything with them. And not just companies: educational institutions, governments, practitioners at an individual level. That, to me, is the most important thing. So I'm all for this. Quantifying it so we can get to the point where we agree on what it is makes total sense. But don't be misled by that, and don't wait around for the definition, like, oh, okay, I'll worry about it when we get closer to AGI. The capability is already there.
1:16:18
All right, next up: Anthropic has published results from the largest multilingual qualitative study ever performed on AI attitudes. They did nearly 81,000 interviews with Claude users across 159 countries and 70 languages. These conversations were actually conducted by Anthropic Interviewer, a variant of Claude trained specifically to conduct and then analyze interviews, which we have talked about in past episodes. Interestingly, the top fear people expressed in these conversations is actually hallucination and unreliability of AI, which ranks as the number one concern, with 26.7% of people mentioning it. That's ahead of jobs and economic impact at 22.3% and loss of human autonomy and agency at 21.9%. Interestingly, Anthropic finds that people often value AI for the same capabilities they fear most. So 50% of respondents experienced time savings from AI, yet 19% felt pressured to simply work faster as a result. 33% cited learning benefits, while 17% worried it would actually facilitate cognitive decline when you're relying on machines to think for you. And interestingly, people experiencing one side of a tension are typically three times more likely to also worry about the other side, meaning these are inherent contradictions within the same people using AI. Now, what's really cool is they actually asked what people want from AI: 18.8% of those who answered said they seek professional excellence from AI first, 13.7% said personal transformation, 13.5% said better life management, and 81% report experiencing some progress towards their vision in those areas. Paul, I'm interested in what you took away from this data; it's a pretty interesting way they went about getting it.
1:18:40
Yeah, it's the approach to research that I found most intriguing. I mean, the data is great, but as I referenced earlier, you've got to keep in mind who the people responding to these questions are when you look at the data, so you're not making broad assumptions. This was done in December 2025, before Claude Code really took off, before the government issues, and before the Claude app became the number one app on the App Store. They have a heavy technical user base: lots of coders, lots of AI researchers using Claude. So even though it's 80,000-plus people across all these countries, it's still likely skewed toward a more technical user. Just for reference's sake, that's important to keep in the back of your mind. But I love the approach, this dynamic interview that adapts based on responses. Not great news for people who run focus groups or do consumer research for a living; this is definitely one of those ones where you're either adapting or a whole new way of doing research is going to run you over. They said their next Anthropic Interviewer study, launching shortly to a small subset of Claude users, focuses on Claude's effects on people's well-being over time: whether Claude is actually making people's lives better in the ways they want, and how it could do so more effectively, which I thought was interesting. And then they said: this is a new form of social science, qualitative research at a massive scale, and we're in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at the why. Conducting this research has moved us and challenged us. We did not expect so many deep, open and thoughtful responses.
By far the most common reflection from our team was that it was viscerally moving to see Claude impacting people's lives for the better, and equally motivating to hear their concerns. We were equally gripped by the fears and downsides: people saying that the same availability that makes Claude useful is what makes it hard to put down, or knowledge workers worrying about outrunning AI's economic impact. When you come into contact with this much raw human experience, it knocks you sideways. They say the usefulness is real, and the question for all of us is how to claim the benefits without incurring undue costs. I thought that was really interesting to note, Mike, because this actually came up during our company retreat: this idea that we're all sort of at the frontier of figuring this out and using it, and it's awesome for productivity and innovation and efficiency and growth and all these things, but it also has this very messy, complicated other side where it has a human impact. Maybe your friends or your family hate it, and they don't even like the fact that you're working on it, and they have these perceptions about what you're doing because you're in AI, or because you're one of the people who talks about it. I honestly think about that sometimes with what we do on the podcast, Mike. I think, God, I hope at some point people see that we're trying to take the human-centered approach, that we're trying to educate people so we can have a positive outcome.
1:20:37
Right.
1:23:34
But sometimes the truth doesn't matter, and I do worry about that. It's part of the reason I never read comments on social. I don't look at our comments on YouTube or X, and only sometimes LinkedIn. I just prefer to do our thing and know we're trying to do a positive thing. But that doesn't change the fact that there's darkness to this, and there's uncertainty and fear and anxiety and hatred, and all those things are very real. So I'm really excited, actually, that Anthropic is going in this research direction.
1:23:34
Yeah, that's why I really like the findings here. Obviously, to your point, they're skewed towards a certain type of person. But when someone asked at our offsite, how do you stay grounded when you're dealing with such heavy and sometimes horribly dark AI topics in the news, my answer was focusing, not to the exclusion of the negative, but on the positive things I've been able to do with these tools. I've been able to do things, achieve goals, get results that I never dreamed possible, and this technology has genuinely made me a better professional, leader, thinker, strategist, even husband and father. So that's kind of the flip side. I love to see in this data people saying, hey, I'm trying to get professional excellence or personal transformation or better life management out of AI. I've done all those things with AI, and it is glorious.
1:24:07
What you're able to do doesn't get rid of the negative stuff or the concerns, but yeah, it's trying to focus on the positive. And this is something we're going to come back to, too. We brought on a director of research a few months back, and one of her focus areas is actually the human side of this. So Mike and I and Taylor are actively talking about more research in these directions, around the human impact. It's something we're going to be doing a lot more about on the show, and even with our Academy we're starting to talk about that stuff. And maybe on our events side we might look at doing some things where we can bring people together to have these conversations, because they're critically important. Okay, well, as we wind down, Mike, we mentioned at the start the AI for Professional Services series, which you taught as part of our Academy. One of the ideas we have is to do little spotlights on these, where, without you even having to take the course, we give you a little bit of insight into some of the key things we learned in building it. So Mike, with AI for Professional Services, any key insights or takeaways that you think would be helpful for people to hear?
1:24:57
Yeah, sure, Paul. So as part of this four-course series, which comes with its own certification, we're breaking down both, from a high level, what is happening at the industry level that you need to know about, and then the actual tactical A to Z of how you identify your own use cases and match AI tools to them in your own professional services career. A couple things jumped out, both in building this course and as someone who was in professional services before we did the whole AI thing. Number one, and we've talked about this on the podcast: one of the trends that really, really needs to be appreciated is the idea that the billable hour model is maybe not just on borrowed time, but dead. If you are on a billable hour model as a professional services organization, AI is a major threat to it, because many, many organizations still have not adequately figured out what happens when you can now do things, using AI, in a fraction of the time they used to take. You cannot simply charge the same number of hours and hope to get away with it. So you see a lot of industry professionals and leaders trying to figure out, how do we adapt our business model without tanking our entire organization? One of the big takeaways there is that the firms that are going to win are the ones that figure out sustainable, defensible, value-based pricing first: pricing on outcomes, not hours. Because, again, you can do so much more in the same amount of time, and there's no chance your clients and customers won't demand that you pass along those savings to them. And then I would also say another big area is figuring out how the human intelligence within your professional services firm becomes your superpower and your competitive advantage.
Because, unfortunately for a lot of professional services firms, there are very intelligent AI models out there that have now, for better or worse, been trained on a lot of your expertise. So figuring out how your humans, with all their experience, background and domain expertise, can actually be leveraged and scaled with AI is going to be the entire battle moving forward. You really want to look at any frameworks, any experience you have internally, as almost your own IP, if you're not already, because AI can scale that and it can be a competitive advantage. But if you are playing at the commodity level of, hey, we're experts in marketing: well, so is AI now. So you have to figure out what kind of expert you are and how you are differentiated. And then last but not least, there are always these questions in professional services like, we'd love to get started with AI, but we work in really sensitive industries with clients that have privacy and data concerns about using this stuff, and we haven't figured that out yet. Totally valid. We talk about that more at length in this course series, but the advice here is to actually start with your back office. If you have these kinds of challenges, if you are still trying to navigate data and privacy concerns, I guarantee your back office can become dramatically more productive by applying AI, often at a very low-hanging-fruit level. We go into very specific use cases and tools in the course series to help you do that. But there are these areas that don't touch client-facing work, where you can actually start your AI journey in the back office and achieve massive, immediate profitability gains just from doing that alone. So tons, tons more in the course series, Paul, but those are kind of the three big takeaways.
1:26:01
Yeah. The other thing I think about, Mike, is just the buyer perspective: understanding professional services, how it's evolving, and how I should be looking for AI-forward professional services firms. Even for me as the CEO, we outsource legal, IT, accounting, advertising; we work with an advertising partner. So I just think about those four: understanding how their business models are evolving, and the importance of working with AI-forward versions of those companies and the points of contact, things like that. So yeah, it's great, and I appreciate you building the series, and this ongoing effort we're making to create content across all the departments and all the relevant industries, and then even into specific businesses, to make that stuff super relevant for people. So hopefully these little spotlights will be helpful for people to get a little taste of what's going on in these different industries. We'll touch on departments, we'll touch on some of the gen app things we're doing, and just try to bring some of that value from Academy to the podcast each week.
1:29:53
All right, Paul, we've got a number of AI product and funding updates here to wrap up this week. I'm going to run through these; if anything jumps out to talk about further, let's do it. So first up, Jeff Bezos is trying to raise a hundred billion dollar fund focused specifically on AI manufacturing. This fund would represent one of the largest single pools of capital ever assembled around AI infrastructure. Google has launched something called Stitch, an AI design tool that turns natural language prompts into high-fidelity UI designs. The tool lets you describe what you want in plain English and generate production-quality design outputs, so Google is kind of in this emerging, quote unquote, vibe design category. Google also rebuilt AI Studio from scratch as a full-stack vibe coding platform. They said they actually spent four months on this rebuild, and the new version lets developers go from prompt to working application entirely within AI Studio. OpenAI has released smaller, cheaper tiers of GPT 5.4, so GPT 5.4 Mini and Nano give developers access to the model family at lower cost and latency. In some other legal news, a court temporarily allowed Perplexity's AI shopping agents to continue operating on Amazon. Perplexity's agents browse Amazon on behalf of users to actually find and purchase products, and this ruling lets the service remain live while the ongoing legal dispute with Amazon, which we covered on a past episode, plays out. On X, the company is rolling out AI-generated article summaries that appear when users share links on the platform. Researcher Ethan Pollig noted the irony that many of the articles being summarized are themselves obviously AI-generated. So we're creating an interesting loop where
1:30:47
AI summarizes AI, and then that trains the Grok language model. Part of the reason they made articles such a prominent feature is potentially to get a lot more training data that's proprietary to them. Yep.
1:32:35
And finally, we'll be keeping a close eye on this one. Demis Hassabis, CEO of Google DeepMind and Nobel Prize winner, is teasing his upcoming book, The Infinity Machine, set for release on March 31st. It covers the story of DeepMind and Hassabis's vision for the future of AI. I'll be looking very closely at that one, Paul.
1:32:49
That looks interesting. I did pre-order this one. This is a good way to end today's podcast. I'm actually going to read the excerpt, because I think this is really fascinating. So this comes from, what was it called, The Infinity Machine: The true reason to build artificial intelligence, Hassabis was now saying, went beyond Kant and Feynman. The goal was to draw closer to what might be called God, to the intelligence that may presumably have designed everything around us. Hassabis, quote: I am first and foremost a scientist. My goal is to understand nature. But doing science is sort of like reading the mind of God. Understanding the deep mystery of the universe is my religion. We humans, we have these faculties. The world is understandable, but why should it be that way? I think there is a reason. Computers are just bits of sand and copper, Hassabis continued, now sounding more urgent. Why should these combine to do anything? I mean, it's absurd. The electrons move around, and then that creates an AI system that can defeat a Go master. Why should that be possible? This table, Hassabis rapped his palm on it for emphasis. Why should it be solid? This is beyond evolutionary coincidence. We can build electron microscopes and interrogate reality down to the most minute detail. We can build systems that detect black holes colliding more than a billion years ago. I mean, what is this? What the hell is going on here? There was a pause, but Hassabis was not yet finished. I sit at my desk at 2am and I feel like reality is staring at me, screaming at me, literally screaming at me, trying to tell me something. If I could just listen hard enough. That's how I feel every day. So you can see why I'm trying to build AI. I've felt that since I was very young, that there's a deep, deep mystery about what's going on here. You can frame it how you want, you can call this God's design, or you can say it's just nature.
I'm open minded about the description, and I don't know what the answer will turn out to be, but at the moment we don't really know what time is or gravity is or any of these things. So there's a mystery waiting to be solved, and it encompasses just about everything I would like to understand before I croak. I would like to understand, and then I'm perfectly fine to shuffle off my mortal coil. That's awesome. Incredible. And again, as we've said on the show many times, Demis thinks very deeply about this. Elon actually commented on that one; he said, I share Demis's urgency and thoughts here. So I think it's important to understand why one of the people, one of the five, is building AI, and it is for something much bigger: solve intelligence, and then solve everything else. That's been his mission for the last 30, 40 plus years of his life.
1:33:07
Incredible. All right, Paul, just one quick note here as we wrap up: go to SmarterX.AI forward slash pulse to take this week's survey. We're going to ask a couple questions about the topics this week. One is about OpenAI's enterprise deployment with that private equity backing we discussed. The second is about Anthropic's study, some of the findings there, and how you feel about them. So we'd love to hear from you. And Paul, really, really appreciate you breaking down everything for us this week.
1:35:55
Yeah, good stuff. Busy week as always. I think we just have one episode this week?
1:36:20
I think so.
1:36:23
I don't know, I didn't check my calendar yet this week. Maybe we have a second one, but we'll be back next week, and then I think I'll be on spring break for like 10 days. So yeah, we might be on a break after next week. Thanks for being with us. Have a great week, everyone, and we'll be back with you next week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.
1:36:24