The Artificial Intelligence Show

#207: OpenAI vs. Anthropic Feud, Claude Mythos Leak, Brutally Honest CEOs & Data Center Moratorium

90 min
Mar 31, 2026
Summary

The episode explores the deep personal and professional rivalry between OpenAI and Anthropic, tracing tensions back to a 2016 San Francisco group house and examining how this feud is shaping AI development. The hosts discuss upcoming powerful AI models from both companies, including Anthropic's leaked 'Claude Mythos' model with advanced cybersecurity capabilities, and analyze brutally honest CEO comments about AI's impact on jobs.

Insights
  • The OpenAI-Anthropic rivalry stems from personal conflicts over credit, management styles, and philosophical differences about AI safety versus acceleration
  • AI companies are increasingly making brutally honest admissions about job displacement, with estimates of 70-80% of human work being replaceable
  • The concentration of AI power in just 5 frontier labs (Google DeepMind, OpenAI, Anthropic, Meta, xAI) creates unprecedented influence over economy and geopolitics
  • Political alignment is becoming a critical factor for AI companies, affecting government contracts and regulatory treatment
  • Enterprise adoption is accelerating despite security risks, with companies increasingly requiring AI literacy as a condition of employment
Trends
  • AI model capabilities advancing rapidly across reasoning, coding, and cybersecurity dimensions
  • Shift from AI as productivity tool to AI as organizational replacement (Level 5 AI)
  • Political polarization affecting AI company prospects and government contracts
  • Enterprise migration from OpenAI to Anthropic for business applications
  • Cybersecurity vulnerabilities scaling with AI agent adoption
  • CEO transparency increasing about AI's job displacement impact
  • Data center construction becoming a politically contentious issue
  • AI-first hiring and performance evaluation becoming standard practice
  • Supply chain attacks targeting AI development infrastructure
  • Consolidation pressure on mid-tier AI companies like Perplexity
Companies
OpenAI
Central focus of rivalry discussion, developing the Spud model and refocusing business strategy
Anthropic
Key rival to OpenAI, accidentally leaked Claude Mythos model details and facing government restrictions
Google
Parent of DeepMind, maintaining political neutrality and setting quantum cryptography deadlines
Meta
Tier-2 AI lab potentially acquiring OpenClaw, developing brain encoding technology
Tesla
Former employer of Andrej Karpathy who flagged major AI security vulnerability
Uber
CEO made brutally honest comments about AI replacing 70-80% of human work
PwC
CEO stated employees who don't embrace AI won't remain employed long
Apple
Planning to open Siri to rival AI assistants and end ChatGPT exclusivity
Microsoft
Suspending new hiring in Azure and sales divisions due to cost concerns
SpaceX
Preparing IPO that could raise over $75 billion, includes xAI as part of offering
People
Dario Amodei
Central figure in OpenAI rivalry, left to found Anthropic after conflicts with leadership
Sam Altman
Key figure in rivalry, made conflicting promises to executives and accused the Amodeis of plotting
Greg Brockman
Recurring source of friction, major Trump donor, blocked from language model projects
Daniela Amodei
Co-led language model project, offered to step down rather than work with Brockman
Elon Musk
OpenAI co-founder who left after ordering layoffs, now suing the company
Dara Khosrowshahi
Made brutally honest comments about AI replacing 70-80% of human work within a decade
Paul Griggs
Stated employees who think they can opt out of AI won't be employed long
Mark Zuckerberg
Building personal AI agent, likely acquiring OpenClaw for billions
Andrej Karpathy
Flagged major security vulnerability in widely-used AI software package
Peter Steinberger
Likely selling company to Meta for billions based on recent podcast comments
Quotes
"Is just this like completely wild, unknown world we're heading into where basically these five companies are going to decide everything when it comes to the economy, business, geopolitics."
Paul Roetzer
"I don't think anyone gets a free pass here. Anyone, an employee who thinks they can opt out of AI is not going to be here that long."
Paul Griggs
"I don't know what you're going to hire those people for. When the layer above can do all the tactical work by just simply prompting a system."
Paul Roetzer
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government."
Judge Rita Lynn
Full Transcript
2 Speakers
Speaker A

Is just this like completely wild, unknown world we're heading into where basically these five companies are going to decide everything when it comes to the economy, business, geopolitics. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. Welcome to episode 207 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We're recording on Monday, March 30, 2026, right before 10:00 a.m. Eastern Time. I don't know if we're getting new models this week, but there's a lot of just chatter going on about what's coming up from all the labs, Mike. So I would say this episode we're going to be setting the stage for what I think is going to be a pretty busy spring. And in some ways we might see some pretty rapid advancements, I would say, from the models. And these labs are pushing out a lot of stuff. So we're going to try and provide the context of what's going on and help people sort of frame it into what it means for what they've got going on in their careers and their businesses. And it's just, yeah, try and connect some dots. There's a lot happening. And as we were getting ready for the show, even just like two minutes before we came on, Mike and I were like, oh, wait a second, didn't this happen in 2024? So we're going to do our best to provide a little historical context into what's happening.
All right, so this episode is brought to us by AI Academy by SmarterX, which helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. New educational content is added weekly so you always stay up to date with the latest AI trends and technologies. The AI for Departments collection features five course series and certificates designed to jumpstart AI understanding and adoption. We have AI for Marketing, AI for Sales, AI for Customer Success, AI for HR and AI for Finance. And Mike is wrapping up AI for Operations this week. Right. So that one's going to be coming.

0:00

Speaker B

Fingers crossed.

2:34

Speaker A

Yeah. All right, so we've got five already ready to go. If you join AI Academy, or if you're already a member, those are all in there already, and Operations is coming soon. So tell your peers in your organization, if they're trying to figure this out, there's a department series for them. These series are an ideal launchpad for organizations that want to level up their teams and accelerate AI adoption and impact. Mike teaches the AI for Sales series and is going to be sharing some insights towards the end of today's episode, some takeaways from that series. Individual and business account plans are available now, or you can buy single courses and series for one-time fees. Visit academy.smarterx.ai to learn more. All right, and then each episode, if you're new to this, again, every week we know we're getting lots of new listeners, so we'll give you a little rundown of how this works. We go through what we call our AI Pulse, where we take an informal poll each week of our listeners on how they feel about topics we talk about in that episode. And then we'll go through three main topics and then rapid-fire items. So these are from episode 205, last week's episode, because then we had an AI Answers episode, that was episode 206. So if you didn't catch that last Thursday, we dropped an AI Answers episode. So the first question was: OpenAI is building an enterprise deployment arm with private equity backing. What's your reaction? And this one, Mike, looks like a perfectly split pie, basically. Yep. So 25% say smart move, AI companies need distribution, not just models. 26% said, I don't have an opinion. 28% said, inevitable, every AI company will do this within a year. And then 20% said it's concerning, it blurs the line between AI vendor and consulting firm. And then the second question we had was: Anthropic's 81,000-person study found the number one fear is hallucinations, not job loss. Does that match your experience? 43%, the largest percentage, said no.
Job displacement is still my top concern. 34% said yes, unreliability is the biggest barrier to trusting AI at work. And then we had 13% say neither, my biggest concern is something else entirely. And 9% said, I'm not particularly worried about AI risks right now. Hmm, that's interesting. And then we did ask one more: how many AI tools does your organization officially approve for employee use? 45% said one to two tools, 34% said three to five, only 15% said six or more. And then there was a small sliver that said none, AI is blocked or not addressed. Right. I would imagine if you're listening to this show and you work for a company that's blocking everything, there's a decent chance you might not be at that company very long. You might be looking for a new career opportunity where you get to apply everything you're learning in AI. All right, so that's the AI Pulse. We'll give you the new questions at the end, but it's just SmarterX AI Pulse to participate in those each week. All right, so we are going to start off today with, sort of, this is the topic that kind of spun out of OpenAI canceling their Sora app, the individual app. And then we zoomed out and said, okay, let's talk about the bigger thing going on, because we touched a little bit on this last week, Mike, about how OpenAI was refocusing their efforts. And I think we're starting to get a little bit more sense of why that's happening and kind of where this is going. And we wanted to frame it within the OpenAI versus Anthropic topic. So let's kick things off there.

2:35

Speaker B

Yeah, Paul. So we had this week a major Wall Street Journal investigation that actually traces this OpenAI-Anthropic rivalry way, way back, almost a decade, to a San Francisco group house that multiple players were living in in 2016. And it reveals that this feud, and it very much is a feud, is shaping the future of AI and is as much about personal wounds and power struggles as it is about the bigger-picture topics of philosophy or safety. So this piece from the Wall Street Journal is based on interviews with current and former employees at both companies. There are a ton of details in here that were previously never actually reported. And so, Paul, you're going to kind of dive into a lot of these moments more in depth, but I'll give a very surface-level view of some of the things they pointed out that started this rivalry between OpenAI and Anthropic. So tensions actually started very early. After Anthropic CEO Dario Amodei, before he was CEO of Anthropic, joined OpenAI in 2016, he watched Elon Musk very quickly thereafter order layoffs in ways that he considered needlessly cruel. He also watched Greg Brockman, of all people, float the idea of selling AGI early on to the nuclear powers on the UN Security Council, as they're all kind of projecting out: where is AI going to go? What should we do about it? And Dario, as early as 2016, 2017, considered that kind of proposal tantamount to treason, and nearly quit over it early in his tenure at OpenAI. So when Sam Altman took over OpenAI after Musk exited in 2018, things apparently got more complicated. Altman made Dario a promise that Brockman and Ilya Sutskever would not be in charge, and then turned around and made conflicting promises to Ilya and Greg. As research into GPTs took off, Dario blocked Brockman from working on the language model project.
Daniela Amodei, Dario's sister, who was co-leading that project, offered to step down rather than let Brockman join. And apparently, by 2020, relations had deteriorated to the point where Altman accused the Amodeis of plotting against him to the board. This all culminated in late 2020: Dario, Daniela and nearly a dozen employees left to found Anthropic. Before leaving, Dario wrote a memo arguing the ideal AI company would be 75% public good and 25% good for the market. Now, five years later, both these companies are valued at hundreds of billions of dollars and racing towards an IPO. Now, one of the reasons we mention this is because, in recent months, Amodei has escalated the conflict sharply. He compared the Altman and Musk legal battle to basically Hitler and Stalin fighting. He called Brockman's $25 million pro-Trump super PAC donation, which we might talk a little bit more about later, just straight-up evil. And he likened OpenAI to a tobacco company. Now, this is all happening as there are some very real competitive pressures reshaping both companies. So this week, Paul, like you mentioned, OpenAI shut down its Sora video app, which was at one point burning a million dollars per day and had dropped to just under half a million users. Fiji Simo, the head of applications at OpenAI, described Anthropic's recent gains in the enterprise market as a wake-up call and told staff the company cannot miss the moment because we are distracted by side projects. So, Paul, the Wall Street Journal publishes this deep dive. There's a lot of personal drama here. How much of this is just personal versus the bigger picture?

6:02

Speaker A

Philosophically, it definitely seems like there's just a lot of residual bad feelings, I would say. So, you know, if you're relatively new to all of this, like if you've been listening to the podcast for the last four years, you've sort of heard this story unfold. Now, as Mike said, there are details within this that we didn't previously know. A lot of these elements, though, were relatively known, certainly the friction between them; but how it all kind of came to be, this is the most detailed unfolding of events that I've seen. The reason we want to talk about it is because it's so relevant to all the other things that are going on right now. You have this battle over government contracts, where we've got Anthropic being designated a supply chain risk. And we'll talk about this in a couple of topics here. But, you know, the judge sort of putting an injunction in place to not allow that. But OpenAI steps in the day they're getting blackballed and is like, hey, we'll take the contracts. And so, for Dario, this is just like daggers, basically. Like, they have this long history. They're both racing to IPO this year. They're both trying to beat each other to the market, basically. They're both being funded by a lot of the same people and companies. They're now in a battle for the enterprise, which, every day I'm talking to companies and leaders of companies who are moving to Anthropic. Like, it is a very, very common recurring theme I'm hearing. And so there's just a massive amount going on. So when you look back in retrospect, November 2015 is when OpenAI is created. So if you've been with us for a long time or you follow the space, it was created intentionally as a counterbalance to Google. So in the early days, it was Musk and Altman and Ilya Sutskever and Greg Brockman, and they wanted to be the alternative to Google, which they considered basically like the evil empire, and they didn't want them to get to AGI first.
So they create this nonprofit to do this research out in the open. It quickly becomes not a nonprofit, which creates the friction between Musk and Altman that we're still seeing play out, that will go to trial, I think, in April. It's, like, coming up fast. And so there's just all this drama going on. But the way that the Wall Street Journal tells this, a lot of it does stem from Dario Amodei not getting the kind of credit he thought he deserved for his contributions to really the whole transformational phase we find ourselves in with language models. So it talks about, you know, again, they found it in November 2015. Brockman tries to get Dario and Daniela to come join them. Greg is hanging out at the house, like, this group house they've got. Greg and Daniela, if I'm not mistaken, worked at Stripe together, because Greg was the chief technology officer of Stripe and Daniela was an executive at Stripe. So I'm guessing that's probably how they got to know each other; certainly it was at least going on around that same time. And then Dario was working as an AI researcher at Google. So in 2016, Greg's trying to get them. Eventually, they come over. They don't agree to come on as founders, but they come over pretty soon thereafter because Greg's hanging out with everybody. And then there's one other name that we haven't. We've probably mentioned this name, Mike, but I don't remember talking about him in great detail. So Holden Karnofsky. This is probably an important element to this story. So Karnofsky was the founder of a philanthropy that promoted effective altruism, which is the antithesis of techno-optimism. So you have the Silicon Valley venture capital world that is pushing for acceleration at all costs, and you have effective altruism, which is kind of seen as the counterbalance to that. So Karnofsky, who is Daniela's fiance, is a major player in this. And Brockman actually starts to take an interest in some of the ideas behind the effective altruism movement.
And so they start having all these debates. Like, in 2016, they're having these debates around: okay, well, if we do end up building AGI, if it goes this direction, who should we be telling? Should we be telling Americans about this, you know, 300 million people, that, hey, it's coming for your jobs? Or should we go talk to the government first? And so Dario argued that when it came to sensitive topics, like how fast AI was developing, it was actually better to go to the government first. So then by mid-2016, Dario joins the lab. He's up working late with Brockman. They're actually working on AI agents at that time. They're looking at video games and other things. This is when, you know, Musk is really heavily involved in OpenAI. Altman is not the CEO yet. Like, they're just kind of building this nonprofit. Ilya is playing a major role in the research direction of the company. And then this is when the layoffs happen, you know, led by Musk and sort of, like, you know, consolidating things. At this time, fall 2017, Dario actually brings in an ethics and policy advisor, and they're talking about sort of what's going on with the future research direction and the impact it could have and the need to get the government involved. And this is when Brockman, within the presentation, sees the fundraising idea that OpenAI sell AGI to governments, including China and Russia. And Dario's like, this is treason. What are you talking about? So it starts to create all this friction. Then Musk exits in 2018. So now we've got the blowup between Musk and Altman that leads to what today is now going to trial. Altman steps in, takes over as CEO. They start really going down this path of the for-profit ideas. Karnofsky has since married Daniela, and he's actually on the OpenAI board. And then tensions really start to flare when OpenAI researcher Alec Radford, who we haven't talked a ton about, we probably should have, Mike, in retrospect, this is a name that matters.
He had laid the groundwork individually for these large language models. So he was playing around with this stuff, building off of the Google paper about transformers, and he's developing generative pre-trained transformers, or GPTs. So they start seeing the language model direction, like, wow, this might be something, starting in 2018, 2019. And so Brockman wants a piece of this. And Dario, who is research director at the time, is like, no way. Like, don't let Greg anywhere near this. And Daniela, actually, who's co-leading the language model project with Radford, she tells Brockman, you cannot work on this. And she offered to step down as head of the project rather than allow Brockman on it. So what you start to see the friction here is, is Brockman, like, over and over and over again. You hear this throughout these issues through the years. So then Brockman said that he and Altman were going to meet with former President Barack Obama. So they're now in, like, the GPT-2, GPT-3 range. Dario is now playing a major role in the development of this and the scaling laws and all these things. And Dario gets cut out of a meeting with the former president. And so now he's pissed. Like, why am I not involved in this? That's when he gets a promotion. Altman does the thing where, you know, he says, all right, Brockman and Sutskever won't lead this. And so eventually Dario is like, you know, he wants to leave, and he's like, I want to report directly to the board or nothing. Like, I'm either out or I'm reporting to the board. I want nothing more to do with all this drama. He's seeing the difference between, like, market companies and public-good companies. He's thinking they need to go in the direction of the public good. So it's just like it becomes this wild unraveling, and that eventually leads to them leaving. And then the thing I referenced earlier, Mike, that you and I were talking about right before we jumped on, is Brockman's role in all this.
So if we go back to episodes of our podcast, episode 110 in August 2024, 117 in October 2024, and then 124 in November of 2024, we tell the story of Brockman taking a leave and what we eventually find. At first it was just that he needed a break because he hadn't had one in nine years. And then it came out on episode 117 that the Wall Street Journal revealed the sabbatical was actually a mutual agreement between Brockman and Altman, stemming from internal friction about Brockman's management style. Now, just to frame that 2024 time period: September 2024 is when the o1 reasoning model comes out. So Brockman takes his leave in August. A month later, the breakthrough is released that they had been working on for a while, called Project Strawberry, which was the first reasoning model. Mira Murati, who's the CTO at the time, right, Mike? She then leaves, like, the week that they announced the reasoning model, and then Greg comes back. So it's just wild drama, but it's all tied to what we're seeing play out today and the friction that exists between all these labs. They all know each other. Like, they all came up together. They were all working in the same direction and then just kind of started going in these different areas. The couple of notes I just want to make here, just on the context of what's happening, and it sort of leads into our next topic, Mike, is each of these labs is working on what I would call these dimensions of AI progress. If you've ever heard me give, you know, my State of AI for Business keynote, or sometimes I'll work this into my intro class, there's these different dimensions that the different labs are pushing on, and a few of the real important ones: agentic, which we're obviously hearing a ton about. I'm gonna drop a link in the show notes to a Lex Fridman podcast I actually just listened to yesterday. It's three hours long. Luckily, I had to clean my garage out yesterday, so I had three hours to listen.
But it's an interview with Peter Steinberger, who created OpenClaw. It's fascinating. So if you want to understand, like, the moment we're in and what's happening with the agentic stuff and how these labs are so bullish now, you listen to this whole thing. It's wild. So there's agentic in that realm. There's something called computer use, which allows the agents to use your computer. Continual learning is a big one. Memory is a really big one. Reasoning. Maybe the most important one is recursive self-improvement. It's this idea that as these labs automate AI researchers, those AI researcher agents can then work 24/7, and they think from there we get to this recursive self-improvement moment where the labs, or the models, can actually continually improve themselves without human insight and oversight. And that then leads to, like, the fast-takeoff moment. And then world models is another one we've been talking a lot about. Like, Fei-Fei Li, Yann LeCun, DeepMind, they're all working on these things. And so what this leads us to is you still have these, like, five frontier labs. So when you think about what leads to a frontier lab: they need funding, they need data centers, they need energy infrastructure, they need compute capacity, like Nvidia chips, and they need the most powerful models. And so your tier-one labs today are Google DeepMind, led by Demis Hassabis; OpenAI, led by Sam Altman; and Anthropic, led by Dario Amodei. Those are the three that matter the most at the moment. Your tier ones. Then I would consider, like, tier twos would be Meta with Zuckerberg. And they're kind of the wild card. They've fallen off for the last 12 months. Maybe they get back in the game. And then you have xAI, led by Elon Musk, which will go public as part of the SpaceX IPO later this year. And they're not relevant in the enterprise right now, in business, but who knows where that goes.
And then tier three is, like, maybe at some point Microsoft gets out of their own way and they figure this all out. But generally speaking, you have three major labs, and two of them are at war with each other. And then when you go to the tier twos, xAI, they're suing OpenAI. So it is just this, like, completely wild, unknown world we're heading into, where basically these five companies are going to decide everything when it comes to the economy, business, geopolitics. And there's obviously labs overseas, especially in China, that, you know, should be part of the conversation. But I'm talking specifically about American AI labs. And so knowing these backstories and knowing these characters is actually extremely important because, you know, and I don't want to make this political, Mike, but when you look at that list I just gave you: DeepMind, Google, they somehow managed to stay politically neutral. You know, we'll talk about how Sergey is actually on this new council Trump's created, but overall, like, Google is trying to just play in the middle because they know governments change, and, like, they gotta be in the game no matter who's in office, and so they're just going to play the game. OpenAI historically has been similar, but then Brockman shows up and gives, you know, 25 million or 50 million or whatever to the super PAC and becomes the biggest donor to Trump. So now, like, whether OpenAI wants to be perceived in that way or not, there's no avoiding the fact that their president is, like, the largest donor to the current administration. Then you have Anthropic, who is, like, the enemy of the administration right now. And they're very much, like, left of center at this point. They're trying to, like, play the game. They're embedded in the tech, you know, the administration, everything they're doing. But, like, they don't really believe in a lot of those things.
And then you go to tier two, and it's Meta and xAI are, like, a hundred percent in with the Trump administration. Now, the reason I bring that up is because politics sways. And so, like, what happens if in two years, or even, hell, during the midterms, the power switches? And then, like, the companies that have gone all in on one side or the other, what happens to those labs if all of a sudden the government doesn't award contracts to them? And you see what's happening to Anthropic. What if it flips and somebody does the same thing to Meta or xAI? Then the only ones that are left are, like, the politically neutral ones that are just trying to make the world better, hopefully. So it's just, like, there's so many layers to this. And I think for people who care deeply about this, and especially the downstream effects on the economy and the environment and things like that, it's really important to understand who's building this tech and what their goals are for it. And so you can kind of, like, pick and choose who you're cheering on and, like, who you're following and who your company's investing in or whose technology you're using. Like, it's not a binary decision. Like, there are lots and lots of layers to all of this. So I'll stop there, Mike. I mean, I could honestly spend the whole episode just talking about this stuff. I just think it's really important for people to understand the complexity of what's happening.

9:53

Speaker B

Yeah, hopefully someone's writing a book about this whole background and history between these people. Because one thing that just jumped out to me, and then we can kind of move on, is: whether you agree with the AI hype or not, all of these people have been taking seriously the prospect of AGI, or something beyond it, for ten years. Ten years ago, before they even had a company, before they even had a business model or anything, they were taking seriously who should have control of this technology, who should be notified of what and when. And now we're starting to see some of the fruits of that labor play out, where we're suddenly in this mode where people are starting to seriously worry about how powerful the technology is. So, yeah, who is behind the decisions really does matter.

24:12

Speaker A

I will say, when you listen to the Fridman podcast with Peter Steinberger, it sure as hell sounds like he's selling to Meta. So he. I mean, I was actually shocked he talked as much as he did, because Lex asked him point blank, like, everybody's got to be coming after you, what are you going to do? And he's like, man, I've scaled a company before with VC money. I don't want to do that again. Like, I'm not the CEO. I want to just build some stuff. And they say, well, who are you talking to? And he goes, well, you know, I've had great conversations with both Sam and Mark, and both have some positive things, but I'm kind of excited about the idea of going and working at a big lab and just getting to build some stuff and have unlimited GPUs to access. And then he's like, yeah, Zuckerberg was just playing with OpenClaw, building stuff, and messaged me on WhatsApp. And next thing I know, I'm like, yeah, you want to call in 10 minutes? It sure sounds like OpenClaw is going to get acquired by Zuckerberg for billions of dollars and Steinberger is going to go there and become part of that superintelligence lab. I can't see an alternative at this point, unless Sam pulls a rabbit out of a hat and, like, convinces them to come to OpenAI. It seems like it's one of those two places right now.

25:00

Speaker B

All right, so next up, we actually found out that Anthropic accidentally exposed details of an unreleased model, nicknamed Claude Mythos, through an unsecured content management system. This is being reported as a Fortune exclusive. Roughly 3,000 unpublished assets were apparently accessible for a time, without authentication, to anyone on Anthropic's website, even things that weren't published. These included draft blog posts, internal images, and documents about this unreleased model, and also about an invite-only CEO retreat in the UK that Anthropic was running that was not public knowledge. The leaked drafts of this material describe Mythos, this new model, as a new tier above Opus, which Anthropic says is larger and more intelligent than the Opus models and has dramatically higher scores on tests of software coding, academic reasoning and cybersecurity. Now, after Fortune asked about this, Anthropic confirmed the model is real. They called it a step change over previous models and the most capable they've built to date. They also state Mythos is currently far ahead of any other AI model in cyber capabilities, and warn that it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders. So Anthropic actually planned to release it first to cyber defense organizations before making it more broadly available. Anthropic overall blames the leak here on human error in their CMS configuration; according to them, this is unrelated to their AI tools themselves having vulnerabilities. Though it is important to note their entire brand is built on being the responsible alternative over here, and details are leaking out of this thing. At the same time, OpenAI says it has finished pre-training its next major model. This is codenamed, for the moment, Spud. And Altman told staff he expects a very strong model within weeks, one that, he said, can really accelerate the economy.
So, Paul, two big models.

26:02

Speaker A

I don't think that means create more jobs.

28:09

Speaker B

Yeah, "really accelerate the economy" might be a very intentionally worded way of putting it. So we've got maybe, in the next few weeks, two huge models. Clearly, at least one of them is a bit dangerous when it comes to cybersecurity. When do you expect these to drop?

28:11

Speaker A

Yeah, I mean, who knows? Things change when they're going through the red teaming to make sure they're safe. It sounds like Anthropic in particular actually already has it in the hands of some beta users, so part of it depends on that feedback loop and when they're ready. But my guess is, if they've got stuff queued up in a CMS, even an unsecured one, they're getting ready to go to market. So these things probably finished training months ago, and they've been in post-training and red teaming getting them ready. But again, this is why we always say you can't make plans based on your current experience with these models. There is always a more powerful model in training. The labs have already seen six to 12 months ahead of what you know to be true about reality. They know roughly what the capabilities are, and they're probably just trying to make them safe at this point. So I don't know. This Anthropic story is crazy. First, I feel for the marketing team, or whoever owns keeping that stuff secure.

28:29

Speaker B

Possibly an ex-marketing team at this stage, I don't know.

29:29

Speaker A

Yeah, I would imagine somebody lost their job over that. And I'm just speaking from experience, Mike; we ran a marketing agency. I can't even fathom being the person that allowed that to happen. So part of this is a story about accidental disclosure, kind of a warning, I guess, for other people: think about this stuff. Part of it is about how much easier discovery of this sort of thing is going to be with agents, where, if you're competitive, or if you're in more of the black hat kind of stuff and you're trying to find vulnerabilities and exposures, you just run your agents 24/7 and go look for this kind of thing. And then, most importantly, there's this idea that there's going to be a leap in model capability soon. So the exposure was 3,000 assets linked to this blog, which is crazy. The part that I found really interesting, Mike, is Fortune informed Anthropic. So Fortune finds this, they actually bring in cybersecurity researchers to assess it, but then they alert Anthropic to the fact that it's all there. And they, to my knowledge, have yet to publish any of the information other than broadly saying a new model is coming and there's a CEO retreat. They didn't publish the blog posts. They have access to all this stuff, images, documents, blog posts, and for whatever reason, Fortune chose not to release the information. My guess is there's probably a quid pro quo here, like: we will give you an exclusive on whatever in the future, don't release it, and in exchange you're going to get first look at the actual Mythos. I don't know. Media relations works in funny ways, but there's got to be some reason they did not do this. So, yeah, just kind of wild. The bigger models worry me. I mean, they talk specifically about reasoning, coding, and cybersecurity. None of this is new.
We've known all the models are getting better at these things, but just the fact of how unprepared people are for what already exists, and knowing we're very close to these next-level models, is worrying. Wall Street reacted not great. It feels like Anthropic tanks a market category every time; pick one. So cybersecurity stocks slumped on this news. We had CrowdStrike, Palo Alto Networks, and Zscaler drop about 6% each that day. SentinelOne tumbled 6%, while Okta and Netskope each fell more than 7%. Tenable plummeted 9%. That was just on Friday. Just the idea that a new model is coming that's better in cyber, which is funny, because we knew this. It wasn't like this was, oh wow, they figured out how to find flaws in cybersecurity; we've predicted this for two years. But anyway, that's how Wall Street works. And then the Spud one, you know, we touched on last week, but there's the fact that Altman said the company would be renaming Fiji Simo's product organization to AGI Deployment. We are entering the phase where they truly all think we are approaching whatever you want to call AGI. Like, we are there. And that led me to go back, Mike, to the stages of AI that we've talked about many times on the podcast. But again, I know we have new listeners every week, so I think it's good to frame this. Back in July 2024, Rachel Metz at Bloomberg did a story called "OpenAI Scale Ranks Progress Toward Human-Level Problem Solving." In it, she had gotten access to OpenAI's internal stages, which they later confirmed were in fact real. They came up with these five levels to track progress toward building artificial intelligence capable of outperforming humans, as the company believed at that time. So this gives you a sense of how fast we've moved.
So, a year and a half ago, OpenAI executives told employees that they thought they were at level one, which was chatbots, AI with conversational language. That's summer of 2024, less than two years ago. But according to a spokesperson at that time, they were on the cusp of reaching the second, which it calls Reasoners. So level two is Reasoners, human-level problem solving. That goes back to what we talked about in the first topic: in September 2024, a couple months after this story comes out, we get our first reasoning model. In a meeting that summer, they actually previewed the o1 model that would then be released in September, right when Greg Brockman was on his quote-unquote leave and Mira Murati was peacing out of OpenAI to go start her own lab. So level one, chatbots. Level two, Reasoners. Level three, Agents, systems that can take actions, which we are right in the midst of takeoff with. Level four, Innovators, AI that can aid in invention, which we are seeing early signs of. And level five, which is why OpenClaw becomes so critical to this whole conversation, is Organizations, AI that can do the work of an organization. And that was a topic I didn't want to have to get into in 2026. My hope was that we would have another year or two of runway before we were talking about level five being within reach. But I do think that throughout this year we're going to have a lot more conversations about entering the early phases of level five. Innovators, we will clearly be at that stage by this fall. I think you can make an argument we're kind of already there in some disciplines, but across most industries we will be clearly into level four by the end of this year. And I do think that in some industries you will be seeing very early signs of level five. And honestly, I would guess they probably have a level six internally. I don't know what it is.
Just so people understand how fast we went from level one to emerging into level four in basically, what is that, 20 months?

29:32

Speaker B

That is a way shorter timeline than I would have thought.

35:29

Speaker A

Yeah.

35:33

Speaker B

All right. Our third big main topic this week is about a couple of comments from some CEOs that are a bit brutally honest, so to speak, about AI and its impact. The first one comes from Uber CEO Dara Khosrowshahi, who broke what amounts to an unwritten rule in tech this week. He did an interview on the very popular podcast Diary of a CEO, and he said he has personally heard executives privately admit the true scale of AI disruption, and then watched those same people go on TV and tell audiences that everything will work out fine, which is something we have talked about on this podcast. He said he understands why they do it, because being honest about job displacement scares investors and dries up fundraising. However, he estimates that AI will eventually replace the work that 70 to 80% of humans do, including knowledge jobs within the decade and physical roles like driving within 15 to 20 years. Which raises the question: what do Uber's 9.5 million drivers and couriers do next? He was asked exactly that, and he literally said, I don't know. Now, at the same time, we got some comments from PwC's US CEO Paul Griggs, who told the Financial Times that partners who are, quote, not paranoid about being AI first will be replaced. He said, quote, I don't think anyone gets a free pass here. Anyone. An employee who thinks they can opt out of AI is, quote, not going to be here that long. Interestingly, PwC cut 5,600 staff last year, and they're shifting tax and consulting services into AI-powered subscription tools that, at least in the first steps of operation, work without a PwC person in the loop. So, Paul, I thought those were two pretty telling comments. I mean, we've talked quite a bit about what you're hearing behind closed doors, how people are not talking about this publicly. Is the dam starting to break here? Because six months ago we wouldn't have heard any of this.

35:34

Speaker A

It feels like it, yeah. I just don't feel like there's gonna be any way to avoid it. Like we've said before, every three months these CEOs have to get on earnings calls, and it's getting really hard not to say out loud what they've been saying privately. So it does echo what I've been trying to create urgency around these last couple years, which is that what executives are telling me privately they're going to do and what they're saying publicly have been two completely different things for a year and a half, two years now. And this is really where I'm spending a lot of my time. I'm going on a trip with my family coming up here, and on the long plane rides I think I'm going to use that time to just try and unplug and think more deeply about this. I've shared a little bit with you, Mike, of the direction I'm going here, and I've actually had some conversations with some listeners at some big enterprises who are thinking about these things as well. No names, but I appreciate their perspectives; it's very helpful for me to think this through. Where I'm currently at is, I think AI-forward managers and above, directors, VPs, C-suite, who have a deep understanding of AI capabilities plus domain expertise and institutional knowledge, are going to be in good shape in the near term. If you go all in on this stuff, you figure it out, and you can help design workflows and systems and integrate agents, you're going to be worth way more money today than you were yesterday, and companies will figure that out. So I think your career prospects, if you fit into that AI-forward manager-and-above group, are good; companies are going to be looking for that talent. I think professionals across all levels who are resistant to learning AI and evolving are going to have a very difficult time remaining employed where they are.
As you highlighted with the PwC example. And finding employment, you know, once we get outside of the next one to three years? It is a brutal reality. I don't like it, but I just really feel like, across industries, across jobs, for people who resist this, whatever their reasons are, and some of them are very good reasons, and I empathize with those reasons, I don't know what else to tell you. You won't be employable. It's a very, very brutal reality. And then my biggest concern is entry-level work. I just keep coming back to this. I don't know what you're going to hire those people for. I have some theories, some ideas I'm at least working toward to try and crystallize in my own mind, and that's why I need time to think more about this. But I don't know the answer to what those people do when the layer above can do all the tactical work by simply prompting a system. All the things they would have done to learn, the administrative work, all of it is going to be easily done by these models. And that's before we have the step change that's coming, apparently, this spring. So I don't know. And then you mentioned, Mike, this National Bureau of Economic Research paper. I dug into this one a little bit; it was related, and I had not seen it yet. Great use for NotebookLM. You and I both use it, Mike; you have a whole notebook with all of our episodes in NotebookLM. It's a great summarization tool for me, so I'll take these dense research papers and run them through it. I'll just read you the summary that NotebookLM wrote on this; I thought it was really helpful. This 2026 National Bureau of Economic Research working paper examines how AI is transforming corporate productivity and labor markets through a survey of approximately 750 financial executives.
The authors identify a productivity paradox where executives perceive high performance gains from AI that have yet to fully materialize in official revenue data, which we see all the time. We talk about that: sometimes you've got to look at leading indicators, because it's not going to show up yet in GDP or in revenue within the organization. While adoption is widespread, investment intensity and motivations differ significantly between large and small firms, with larger companies focusing on labor cost reduction, which we just talked about. Despite concerns regarding automation, the study finds minimal aggregate employment declines, suggesting that AI currently functions more as a tool for task enhancement than for total job replacement. However, a significant reallocation of labor is underway as demand shifts away from routine clerical roles toward skilled technical positions. Ultimately, the research suggests that AI-driven growth is primarily fueled by innovation and product development rather than simple capital deepening. And the one thing I'll note related to this, Mike, is that the people they surveyed responded in November and December 2025. So relatively new data, given kind of the moment. But they are interviewing CFOs.

37:31

Speaker B

Yeah.

42:13

Speaker A

And while there are exceptions to the rule, the CFO is generally not the person I've been meeting with in enterprises who has the greatest comprehension of the moment we find ourselves in from a technology perspective, of what these things are capable of doing. They don't always have the highest degree of AI literacy and capability themselves. They're not pushing the models every day and finding business cases; it's not usually the CFO doing that. And therefore the CFOs you're asking about this might not even be aware of the reasoning capabilities or the agentic advancements that are happening. And then when you consider that the research was done in December, that was before the Claude Code moment that basically changed everything, before OpenClaw, and think how much has changed in just three months. So while it's always good to look at this kind of data, you do have to frame it with: okay, who were they asking? What is the literacy and competency level of those people? Not that they're not super smart and super accomplished, but it's just not their job to be the ones staying up on all the latest model news. So again, it's just information. It's good. Put it in the filing system in your brain of trying to understand the context of where we are and how to talk to people in your organization. But it's not an end-all, be-all. It doesn't mean that all of this is exactly true within your company or industry. It's a very dynamic place, and we need all these different perspectives. But you have to piece together your own story, I guess, is what I'm saying.

42:13

Speaker B

Yeah. You know, one other thing that jumped out to me, that reinforced a lot of what we've discussed over the past year at this stage, is the PwC US CEO, Griggs, who said, basically, an employee who thinks they have the opportunity to opt out of AI is not going to be here that long. I assume that's been very clearly communicated internally if he is telling the Financial Times; if it's not, that's probably a good memo to get out this week. I know that can read as harsh to a fair amount of people, but I really appreciate the honesty, because behind closed doors I know of organizations where people are already complaining about employees who are not embracing this stuff, because they know they have to. And yet it's not been expressed to those employees clearly that this is a condition of their employment. Whether you agree with it or not, at least the expectations are very clear. And I think that's more important than...

43:38

Speaker A

Yeah, it's like anything else in life. I mean, think about kids and stuff. Sometimes you've got to tell people the hard truth, and they may not get it yet. In the case of your kids, it might be five years until they grow up and go, oh my God, my parents were right.

44:37

Speaker B

Like, right.

44:53

Speaker A

And I feel like this is kind of one of those situations where people don't want to hear this. And I totally understand it; again, complete empathy for how hard this is and, honestly, how unfair it's going to be. But we have no control over that. This is happening. The models are going to get smarter, they're going to get more generally capable, and they're increasingly going to do the tactical things you and I do for our work every day. Pretending like that's not happening, or that it's not going to affect you or your family or your peers, is just not a winning scenario. And I agree: as harsh as it seems to say what the CEOs are now increasingly publicly saying, I would much rather they just said it than pretend it's not going to happen. I know plenty of enterprises and leaders who know what's going to happen and just refuse to say it publicly, or to say it to their own people. I would really rather we just dealt with the hard stuff now, and had time to be proactive about doing something about it, than pretend it's just gonna be okay because it always has been before, when general purpose technologies came into the world, like, we just figured it out. That is either choosing to lie or being ignorant of how different this transformation is versus previous general purpose technologies. And there's not much room in between; it's one or the other, largely.

44:54

Speaker B

All right, Paul, before we dive into rapid fire, just a reminder: this episode is also brought to us this week by our 2026 State of AI for Business report. We are currently in survey mode collecting data for this report. This is an expansion of our popular State of Marketing AI report that we've done every year. This year we are going beyond marketing to collect tons of data on how AI is being adopted and used across companies in every function. We have already surveyed thousands of business professionals across all industries and functions. The survey period is in its final 10 days or so here, so if you have not taken the survey and you're part of the podcast audience, we'd really appreciate getting your perspective. If you go to smarterx.ai/survey, you can take the survey; it literally takes like five minutes to complete. In return for completing it, we will send you a copy of the full report when it drops, plus you get entered for a chance to win or extend a 12-month SmarterX AI Academy membership. So go to smarterx.ai/survey. We are just about to wrap this survey up and we'd love for your voice to be included. And I'll throw

46:38

Speaker A

in like a personal ask on this one. I mean, I know we have so many incredible people listening to this podcast every week that are in all kinds of different disciplines, roles, industries, and it would be extremely appreciated to just get your perspectives on this. Like Mike said, it takes like five to seven minutes and we want as diverse of backgrounds and roles and industries and departments as possible to make this data as valuable as possible for all of us. You know, we talk a lot about research and all these different ways of doing it and we want to make this like, you know, an example of what can be done in an industry, make it as real time as possible, get this turned around as quick as we can to give you all that information. So yeah, if you can take those five to seven minutes, it would mean a lot to me and Mike in particular and the rest of our team.

47:46

Speaker B

All right, let's dive into some rapid fire topics this week, Paul. First up, we have an update in the Anthropic versus Pentagon saga yet again. This past Thursday, federal judge Rita Lynn issued a preliminary injunction blocking the government's supply chain risk designation against Anthropic. This is in response to the lawsuits Anthropic has filed challenging that designation. In the ruling, Lynn wrote that nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government. She also found that Anthropic is likely to succeed on the merits of its lawsuits. The injunction blocks 17 federal agencies from enforcing the ban on using Anthropic, including the original February 27th order from Secretary of War Pete Hegseth, and also President Trump's social media directive to not use Anthropic. The Pentagon is not backing down, at least publicly. Hours after the ruling, CTO Emil Michael, who we've talked about in the past couple episodes, called this a disgrace, claiming it contained dozens of factual errors and arguing that one of the two supply chain risk designations they have put into effect remains in full force under a separate statute. The government has seven days to appeal. Dean Ball, a commentator we've talked about quite a bit here as well, who previously served in the Trump administration, called this a devastating ruling for the government. Paul, where does this actually leave us? I think we were hoping to see a resolution here, but it seems like this is just the next battlefront.

48:29

Speaker A

Yeah, I guess we're just waiting for an appeal. The judge said everything that everybody was thinking, basically. I don't know; it sure seemed like this was just a vendetta. Like I've said on the podcast before, it's just egos and vendettas and politics. It's not really about the technology or Anthropic. We touched on this a little earlier; part of the reason to go into the main topics up front was to frame for people how politics does, unfortunately, increasingly play a role in this. And I think that's really the key issue here: it's becoming increasingly political. I hope they eventually negotiate it. That's what I keep thinking is going to happen, that they'll eventually come down. But each side keeps digging in. So we'll wait and see what happens with the appeal. I'm sure this will take forever to actually come out the other end, but it seems in the meantime the government's going to keep using their tech anyway. So. Yep.

50:09

Speaker B

All right, so next up, some more political news; we have some more AI political moves this week. First up, Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act, which would pause all new data center construction nationwide until Congress passes federal AI legislation that has protections for workers, consumers, the environment, and civil rights. It is one of the most aggressive AI policy positions staked out this Congress. It is worth noting that over 100 local communities have enacted their own data center moratoriums, according to the bill. Basically, the ban would only be lifted after Congress passes federal AI legislation with those kinds of protections; once the ban is in effect, they've got to pass a law that actually satisfies the conditions to get it lifted. Now, this bill is unlikely to advance, but it does reflect some very real political pressure, and it shows how perspectives on AI, like we've talked about, are scrambling party lines ahead of the midterms. Second, in basically the opposite direction, President Trump has appointed Mark Zuckerberg, Jensen Huang, Marc Andreessen, Sergey Brin, and other major tech leaders to a new President's Council of Advisors on Science and Technology focused on AI, co-chaired by David Sacks and Michael Kratsios. Notable absences from the council so far include people like Sam Altman, Elon Musk, and people from Google and Microsoft. As a note, co-chairing this council is going to be David Sacks's new role within the administration, because he very recently stepped down as AI and crypto czar. Sacks also told Bloomberg that Congress could pass bipartisan AI legislation within months, creating a national framework that would override the patchwork of state laws, which we've talked about.
The White House recently released their kind of legislative blueprint, or wish list, for AI, which calls for child safety protections, streamlined data center permitting, IP protections, and more. It is an open question whether the two parties can cooperate to actually pass bipartisan AI legislation, especially before the midterms. So, Paul, I'm curious what you make of these two recent developments. At the very least, symbolically, that data center moratorium seems to be trying to tap into some populist anger about data centers.

51:07

Speaker A

Yeah. So, as I said in the previous one, AI is becoming more political, which we assumed. A pause is not going to happen. Their effort to raise awareness about the issues is good; it's going to get citizens more educated and involved, hopefully without playing the fear-mongering card. But it is definitely not going to get the legislation passed. That's not happening. I also would not hold my breath on any federal legislation around AI. I think it's just a stall tactic to even be pretending that they care to do that. I don't know when that changes, but I would be really surprised if there was actually any federal AI legislation before the end of the year. This council, there's almost nothing known about it. I mean, the White House's own announcement about it was like three paragraphs long, and it pretty much just said, these are the people who've agreed to be on it, and that it could be up to 24 members. That's pretty much all we know about it. So it's not really worth talking about much, other than that there are some big names on it, and some names that aren't on it, which is noteworthy. And then the related thing is another pro-AI PAC popping up. Axios had this: a new pro-AI political operation is jumping into this year's midterms with a plan to spend more than $100 million, the latest push by a big-money group to promote a deregulation agenda. The group, dubbed Innovation Council Action, has the blessing of Sacks, who we talk about a lot lately. It's distinct from other pro-industry groups in that it's focused on boosting President Trump's priorities. The new group is led by Taylor Budowich, a former White House deputy chief of staff for Trump, who also formerly led the pro-Trump super PAC MAGA Inc. and the Securing American Greatness political outfit. He was a top official on Trump's '24 campaign.
The group compiled a scorecard assessing how supportive lawmakers are of Trump's AI agenda, which will be used to determine who the group supports or opposes on either side. Probably, I mean, it's mostly going to be Republicans, but at this point they're going to fund anybody that is pro-deregulation, pro-techno-acceleration. Because the organization is a nonprofit, it's not required to disclose its donors, a dark money organization, as it's generally called. This is from Sacks: Innovation Council will play a critical role in advancing the innovation agenda championed by President Trump and this administration; we welcome its support at this important juncture. Other AI-focused political groups include Leading the Future, which has raised $50 million. That group's listed donors include Greg Brockman and Joe Lonsdale, who's a co-founder of Palantir, if I'm not mistaken, Mike, and Marc Andreessen of Andreessen Horowitz. And Meta has launched a pro-AI super PAC effort that is expected to spend around $65 million for the midterms, with plans to focus on state-level races. So quick math: just those three alone is almost $300 million in ads about AI deregulation and trying to elect people who want to accelerate at all costs. That is why I would not hold my breath on any federal legislation. You're going to see more AI ads than you could ever want to see. So, yeah, it's going to be interesting.

53:33

Speaker B

All right, so next up: as AI tools get more powerful and more people rely on them for real work, the security risks are scaling up just as fast. This week we got a case study in what that can look like. Most AI tools are actually built on top of layers of software packages, often open source packages, that developers install and end up trusting to power their software. And Andrej Karpathy, who we've talked about many times, former director of AI at Tesla and OpenAI co-founder, flagged what he called a software horror: an attack on one of these open source packages that millions of people, and thus programs, depend on. He outlined this in a post on X. This package, called LiteLLM, has 97 million downloads per month and is widely used across the AI ecosystem. This past week, attackers slipped malicious code into a routine update. What it meant was that anyone who had it installed had their passwords, cloud credentials, API keys, and other sensitive data silently stolen and sent to the attackers. This spread far and wide because a lot of tools depend on LiteLLM behind the scenes. The poisoned version was live for less than an hour, but it was only discovered because it had a bug that crashed a developer's machine. Karpathy noted that if the attackers hadn't made that mistake, this could have gone undetected for days or weeks. And the attack was part of a broader campaign that hit five different software ecosystems. So the point here, the reason we're talking about this now, is that AI agents are about to make risks like these much, much worse. OpenAI actually just backed a startup called Isara at a $650 million valuation; they're building software to coordinate thousands of AI agents working together. That sounds great. We're moving into this agentic AI era.
But as agents start installing software, making decisions, and managing systems on their own, this kind of thing, where your agents download open source software that has been poisoned or compromised, is going to grow dramatically in frequency. So, Paul, I was curious. Karpathy highlighting this is a big deal; this is an enormous open source package used by a lot of people. If you have agents running for you, how can you be sure you're not off downloading something that's handing over your personal information because it's been exploited or poisoned?
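[Editor's note: the hosts don't name a specific defense here, but one standard mitigation for the attack described above is pinning dependencies to known-good cryptographic hashes, the idea behind pip's hash-checking mode, so a silently tampered "routine update" is rejected at install time. A minimal sketch of that verification step; the package name and contents below are made up for illustration:]

```python
import hashlib

def verify_artifact(artifact: bytes, pinned_sha256: str) -> bool:
    """Install gate: accept a downloaded package only if its SHA-256 digest
    matches the digest recorded in a lockfile at pin time."""
    return hashlib.sha256(artifact).hexdigest() == pinned_sha256

# Hypothetical known-good release, with its digest recorded when it was vetted.
good_release = b"example-package-1.2.3 contents"   # placeholder bytes, not a real package
pinned = hashlib.sha256(good_release).hexdigest()

# A later update that was silently tampered with fails the check,
# because any change to the bytes changes the digest.
tampered = good_release + b" + credential-stealing payload"

print(verify_artifact(good_release, pinned))  # True
print(verify_artifact(tampered, pinned))      # False
```

[The trade-off is that pinned hashes also block legitimate updates until a human re-vets and re-pins them, which is exactly the kind of friction the hosts discuss next.]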

56:55

Speaker A

I have no idea. And I'm pretty convinced that most people using these things have no idea. There are just so many unknown risks. That's why, when people keep telling me, I can't believe you're not doing this and that, it's like, dude, I don't even understand the risks associated with that stuff. So I'm just in no hurry to find out. And, you know, half jokingly I've said for the last couple years that IP attorney is one of the safest professions to go into for the next decade, because of all the issues tied up in AI and the use of copyrighted materials. Cybersecurity, that's a safe profession too. I mean, the surface areas where you can be attacked, and the complexities that will need to be solved to use this kind of stuff within enterprises, are endless. So that's the thing: we're going to race ahead and have these really advanced models and agentic capabilities, and then the risks just compound when you start doing this stuff. That's going to create a lot of friction for adoption within organizations, which honestly, at the end of the day, can probably be a good thing. The model companies aren't going to slow down, so I think enterprise and human friction might be the only thing that saves us here. It's just going to take a while for us to figure all this out and integrate it into what we do. And just because the models are capable of replacing some human labor doesn't mean they're going to right away. In the end, that's a good thing.

59:30

Speaker B

I think it's almost the flip side of what we talk about as the benefits of some of these tools, where, you know, we talk about vibe coding or vibe marketing or whatever: AI gives non-specialists the ability to do specialist things. But there's a danger there, because now I'm suddenly exposed to all sorts of decisions in domains I have no experience in. So if I want to go vibe code something and an agent recommends, hey, we're going to download these three open source packages to facilitate what you want to build. Okay, great.

1:01:08

Speaker A

Right.

1:01:43

Speaker B

But there are probably 18 different questions that a software developer would know to ask that I don't, and that's very dangerous.

1:01:44

Speaker A

Yeah, we talk about this. I can build apps all day now. I can just play around in Claude and build some stuff, and it's amazing. But to move it into production, to put stuff publicly live and open it up, that's not my area of expertise. It's something I'm trying to solve for, and we'll figure it out. But I'm in no rush to put things out before I understand what we're doing.

1:01:52

Speaker B

Yeah. All right, our next topic. Apple is planning to open up Siri to rival AI assistants in iOS 27, ending ChatGPT's exclusive role inside Apple software. According to Bloomberg, users who have Google Gemini, Anthropic's Claude, or other AI apps installed will eventually be able to route Siri queries directly to those services through a new extensions system in Settings. Apple plans to announce these changes at WWDC on June 8th. This basically eliminates the need for one-off integration deals like the original OpenAI partnership; any AI app in the App Store could potentially plug into Siri, and Apple is actually going to take a cut of paid subscriptions through its payment system. Separately, Apple is building a standalone Siri app, like we've talked about, with a full chatbot-style conversation interface and a unified search system. The goal is to transform Siri from a voice assistant into an actual system-wide AI agent. But a lot of these updates were first announced in 2024 and have been delayed multiple times. And lastly, behind the scenes, The Information reports that Apple's partnership with Google is a bit deeper than previously known. Apple has complete access to Google's Gemini model in its own data centers and is actually able to distill it into smaller models that run directly on Apple devices. So Paul, some interesting updates here, most notably that Apple is trying to expand the types of AI that can be used with Siri in the meantime, while they apparently get their

1:02:14

Speaker A

act together. It's a continued waiting game. At some point Apple's going to figure it all out, and when they show up it could change everything from an adoption perspective and a usage perspective, because they have trust and they have access to everything: all the apps, all your data. I've talked about how, if they solve the health side, I would totally rely on Siri more than anybody else, because they already have all that health data in my phone. So they're the wild card here. And it seems like a smart strategy to just let everybody else spend the hundreds of billions of dollars building data centers and energy infrastructure and frontier models, and then serve those up to the billions of people that use their devices instead of trying to compete in that game. In the end, it may work out in their favor that they missed the game up front and show up late to figure it out. The one thing, and I don't know if I'm thinking in the right direction here, but doesn't this make Perplexity just irrelevant? I mean, we don't talk about Perplexity much anymore anyway, but isn't that their whole thing? You can choose whatever model you want and connect whatever you want.

1:03:51

Speaker B

Yeah, that's a big selling point of Perplexity and some other tools.

1:04:59

Speaker A

Yeah. Like, if I can just do that through Apple, through my Mac devices and through my iOS devices, like, what would I ever need something like a Perplexity for?

1:05:02

Speaker B

Yeah, just another chapter in Perplexity needs to get acquired quick.

1:05:10

Speaker A

Yeah, yeah, sell the top, which would have been 18 months ago.

1:05:15

Speaker B

Right. All right, so next we have kind of a new segment we're trying to do every week here. We hear from listeners all the time that one of their favorite parts of the show is when we talk about how we're actually using AI at SmarterX, and we do that in a bunch of different contexts as part of different topics. But we wanted to make this a regular segment. So every week we're going to attempt to give you a quick, dedicated look under the hood at real AI use cases that we are exploring, building, or deploying in our own work. Paul, obviously you're working a lot on leadership and strategy items, as well as overall organizational design. I'm working a lot on content marketing, sales enablement, productivity, and performance. So between the two of us we're definitely covering a lot of the different types of knowledge work you might be doing if you're a listener. We're going to share, direct from us, what we're doing week to week. So to kick us off, Paul, you have been working on some stuff related to AI learning journeys. Maybe tell us about that, and after that I can share what I've been doing with some AI-powered slide creation.

1:05:19

Speaker A

Yeah, sounds good. So, like I've said before, some of this stuff, honestly, I traditionally wouldn't even talk about publicly before we'd done the whole thing. But more and more we're trying to build in public to a degree, share what we're learning as an AI-native company, and try to help other people along. So yeah, I'll share a little bit about what I've been working on. I spend a lot of my time on the vision and innovation side of the company, trying to think about how this technology empowers us to innovate in new ways, build new things, and create more value faster for our customers. And on our AI Academy side, which is where a lot of my time goes, I always tell the team: we're not in the business of selling courses. We're trying to power personal and business AI transformation. You can go to LinkedIn Learning and get amazing courses. You can go to Coursera, you can go to Udemy, you can go directly to OpenAI or Anthropic or Google; they've all got great stuff. And we would recommend those courses. We're not trying to compete with any of those companies. As a matter of fact, we would do deals with those companies, collaborate, partner, things like that, and we have some partnerships in the works with a number of them. So I think more broadly about what it takes to actually drive a transformation, either individually, for me as an individual leader or practitioner, or for my organization. And then what role, specifically, do the courses and certificates we talk about on the podcast play in that bigger transformation? Because we talk to these companies every day, and you realize: listen, you can buy courses from us and get access to all these things and it's going to be great, but that is one part of the transformation story.
You need to think more holistically from a change management perspective. So you need assessments, employee surveys, executive briefings so leadership is on the same page, and employee communications plans to roll this stuff out and tell people their jobs are changing and the future of work looks different. You need the learning management system and the courses, which is where AI Academy plugs in. You need personalized learning journeys, with personalized use cases and tech based on departments and roles, and things like workshops. So I've basically been devising what I'm calling an AI transformation system. This is something I'll share more publicly and publish some stuff on, but generally speaking, I look at it holistically and ask: what are all the components you would need to actually drive this transformation, and how can we help people visualize them? I've been working on this, and different elements of it, for a couple years, and I made a lot of advancements in the last two weeks in particular. But the design and visualization of it is just not my area of expertise. So I have sketches. I literally lost a sketch at a hotel in Arizona; I left it behind, probably in spring of last year, and I hadn't taken a picture. But I remembered a friend of mine who is a designer had taken a picture, and he sent it to me. Thank goodness I was able to retrieve it, but I couldn't get there. I just kept running into these barriers where I couldn't figure out a way to visualize this thing. So last week I thought, wait a second, what if I just wrote the whole story of what I'm trying to do, as I would a project brief for a designer or developer? So last Tuesday or Wednesday, I spent three hours writing a prompt, and I'll read an element of it. The whole prompt is 1,100 words and 7,200 characters.
So it's not an insignificant prompt. But I said: I want to create an interactive visualization representing paths of AI transformation across our AI transformation system. It's a collection of resources and systems that accelerate literacy and success. The core component of our AI transformation system is our AI Academy. We see literacy as a fundamental part of personal and business AI transformation, and personalized learning journeys are at the heart of what differentiates our approach. We want to show learning journeys that are made possible by our courses and experiences, but we also want to convey how those are just part of the overall process. The visualization should convey a sense of time and progress. Now again, there's another thousand words to this thing. So then I put it into Gemini, Claude, and ChatGPT. Gemini gave me an infographic, so that was useless. Claude gave me a solid V1 with drag-and-drop capability for building these custom journeys and timelines, where you can go month by month, and it was amazing. And then GPT 5.4 Thinking gave me a really solid prototype, similar in style to Claude's, where I could interact with it and actually build these custom journeys. Both of those were far beyond anything I had conceived of with my sketches. My sketches were just obsolete; they were more like what I got out of Gemini. I basically got an illustrated version of my sketches from Gemini, while the other two gave me a totally interactive thing. So I think it's a really interesting example of the need to test multiple models when you're doing these high-value use cases, and of this project brief approach. If you really want to do something high value, take the time to write a prompt as though you were giving it to an outsourced person or an internal person who's going to run with that project.
Think the whole thing through. And I did it in depth. I thought through every element of the transformation system I'm designing. I gave descriptions for every one of them. I built our entire course catalog into this thing. It was very extensive. And then it's one of those where you go, okay, that's as good as I can do, hit go, and then you just sit back, pray, and wait. And then seven minutes later you're like, holy, I can't believe it just did this. Then you start moving things around and using the filters and you're just like, oh my God. I mean, months and months of work, and what would have easily cost me tens of thousands of dollars to build with a developer, I had in seven minutes. It was just shocking, but in an amazing way.

1:06:27

Speaker B

That is incredible. And yeah, definitely seems like some very different outputs based on the model.

1:12:04

Speaker A

Yeah, for sure.

1:12:10

Speaker B

So I will just quickly also share something I've been working on. I am obviously creating quite a few of our course series for AI Academy, and there's this big problem I run into every time I sit down to do a course series: I spend weeks on research, synthesis, scripting, and outlining, and basically wrap up the course, except it's still not really wrapped up. I face a final slog, which is that I have to literally create hundreds of slides before I can record anything. Each department series we do, for instance, has four courses; that's hundreds of slides per series. We have tons of templates and we've streamlined this process quite a bit, but it still takes hours of work. And it's not even just the hours; it's not intellectually rewarding work, let's say, to be nice. So for months and months I've been trying to get AI to create slides for me. There are generic AI slide tools that have been decent for a while, but we have really specific branding and templates, for better or for worse, that we have to follow. So I can't just say, hey, create a deck for me from scratch; you've been able to do that for months, and it works. I needed something a little more bespoke. Finally, I was actually able to get Claude Code to do this with a pretty high degree of fidelity for the specific stuff I'm working on. Your results and your mileage may vary, but what was really cool about this, from what I've learned about what works and doesn't with Claude Code, is taking the time up front to pull together an excellent set of example files and guidance, and actually putting it into planning mode before creating anything. So being like: here's what we're trying to plan out, here's all the nuance and context, here's what's gone wrong in the past, and by the way, here's a folder with all the examples.
And after some wrangling back and forth, I got to a point where we now have a skill where Claude Code can take some scripts, put them into the presenter notes in the right places for each slide, and actually build the slides for you with some placeholders. It's not perfect at everything, but my gosh, last Friday I think it was, I got to a place where instead of hours and hours, this process took maybe 20 minutes of back and forth. Obviously it took hours and hours before that to make it actually work, but my gosh, I was so happy that I finally got to this point. And I don't know exactly what changed; I think it's a combo of things. I tried this a couple months ago with Opus 4.6 and Claude Code, but this time I took a more diligent approach to the context. I think also Claude has gotten better with PowerPoint, which might have been the unlock. Maybe. Sometimes, even with the same approach as before, you just need a few cracks at it before it actually takes. So really cool stuff. Highly recommend trying it out.

1:12:12

Speaker A

No, that's awesome. And that goes back to 2023, when we were getting these early previews from Microsoft and Google of what was to come: all of your productivity apps are just going to have AI infused in them, they're going to do these things. And it's like, okay, cool. So, PowerPoint? Yeah, we'll get to that. And then it ends up Claude builds a better way to do PowerPoint than Microsoft does. So it'll be interesting, Mike, because Mike and I both build courses and do public speaking, but I build slides first. Mike's a script guy; he develops the scripts first. I actually don't script things. I'll often do an outline of what I want it to be, but I generally build best when I just start putting slides together and then form my thoughts from there. Sometimes I'll put speaker notes in, but most of the time, for presentations or courses, I don't have scripts at all. So I don't even know that Mike's approach will work for me, because our workflows are different. But it's awesome, and it almost makes me want to try scripting the next time I do it, to see if I can figure it out, or at least say, here's my deck, let's make this better. But yeah, it's interesting. Everybody's got different workflows for how they do these things.

1:14:59

Speaker B

No, that's a super important point, too. And that's why I kind of dissuade people when they ask. Look, I could give you the skill if you wanted, the Claude skill, but it's going to be useless to you. It's so bespoke to what I do and how I work. Plus, with something like Claude Code, it's referencing other skills and preferences and memories about what I like and don't like. It also requires, like, eight other skills that are needed for course creation. So the point here is just: know what's possible, then go experiment on your own in your particular context. I think that's really the most valuable takeaway for me.

1:16:13

Speaker A

Yeah, and it goes back to the whole AI transformation system idea I was sharing. Personalized use cases are so critical. If you approached this broadly and said, all right, let's automate the creation of PowerPoints, or Apple Keynote, or Google Slides, whatever, that's not even uniform, because people have different workflows and different ways of thinking. So you really have to drill in and create these very specific, personalized use cases. And when you do that right, and you take the time up front, that's when you can unlock dozens or hundreds of hours of productivity or efficiency by doing a little extra. Like I did: take those three hours, write the thorough prompt, think it through like you're going to give that project brief to somebody. And it's cool, because Mike and I see each other all the time, but we're busy doing whatever, and sometimes we don't even hear what each other's working on except in passing, like grabbing a coffee: oh, dude, did you see this thing I did? Showing it real quick on the computer, and it's like, oh, we should talk about that on the podcast. And then we don't talk about it again until we get on these episodes. And it's like, oh, sweet, I didn't even know we'd figured out how to do that internally. That's cool, right?

1:16:46

Speaker B

All right, Paul, so for this next segment, which we did for the first time really formally last week, we're spotlighting what we're working on with AI Academy. Each week we're going to spotlight one of the courses in AI Academy that is currently live. Paul, you had teed up for me which course we're talking about this week, and it won't always be me, but I happen to be the instructor of today's course. We'll either talk with me or another instructor, give you a peek behind the curtain of what's actually in these courses, and give a value-driven takeaway from the course that you can use right now, whether or not you ever take the course or do anything with AI Academy. The idea is to bring some of the value we're creating in AI Academy to the wider audience.

1:17:55

Speaker A

Yeah, we got AI for Sales this week, right?

1:18:46

Speaker B

Yes. So this week we are talking AI for Sales, which is our four-course certificate series built specifically for sales professionals. I was going to run through a couple of big takeaways that came out of that one for me. If you're a sales professional out there, I think these could be pretty helpful in getting you started or taking you further with AI. So first up, what really jumped out to me as I was putting together this course and doing research for it is that sales reps only spend about 30% of their time actually selling, and that number has not changed in several years, according to some research from Salesforce, I believe their State of Sales report. Basically, reps spend way too much time on stuff that is either leading up to the sale, or that is admin, or distractions from the sale. So what we do in the course is approach this very practically, helping you find those immediate use cases that can actually free up your time so you can sell more. That's kind of goal number one of this course. There are plenty of other bigger, strategic-level considerations that will take you A to Z through your AI journey, but it's really about making you more productive and freeing up your time so you don't have to do all the stuff that's a distraction. And the way we do this: first, we start with advice on finding your own AI use cases. We go through tons of strategies for this in the course, but there's one that's really helpful, which is a simple filter. Run what we call the checklist test. Thinking about all the stuff you do in a day, if you can write out the steps for a task, if you could teach it to a new team member pretty easily and they could follow it without needing to ask too many questions, guess what: that is something you should highlight as a candidate for AI automation or augmentation.
So any sales rep can sit down right now and, in 10 minutes, walk away with a few ideas of what they're doing in a day that has the same steps every time. You do not need to be doing that yourself. Now, takeaway number two for sales pros is about what AI can do that stuff for you. This sounds simple, and we've said it on the podcast before, but it could not be more important for salespeople specifically: audit your existing tech stack before you buy anything new. All the new AI shiny stuff we talk about is incredible, and you can pursue longer-term technology projects and new tech you want to integrate into your CRM. But sales does so much in existing systems that you should look to your existing CRM and systems first, because things like Salesforce Einstein, HubSpot, and Microsoft all have really powerful AI increasingly baked in. And even if that AI is not perfect, using the thing you already have, that's already approved, makes your life so much easier. And then finally, we've said this before about knowledge workers in general, but it goes especially for salespeople: you're on the go so much, you have so many things to do, you need to be focused on quota. It's really important to remember this basic but powerful advice: if you're still prompting AI of any type, whether it's in your CRM or a separate tool, like it's a search engine, you need to evolve your approach. We actually show this side by side in the course. A one-sentence generic prompt is going to give you a very simple, very generic output. However, if you really structure your prompt, giving the AI a role, telling it its task, giving it context and examples, telling it what format you want, that is the way you get truly exceptional results from AI. And I hope my earlier example about slide creation communicates that the more context you give these tools, what Paul was saying with his extensive structured prompt, the better. This is the way to get real value out of these tools if you're not doing it already.
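The role/task/context/examples/format structure described above can be sketched as a simple template. This is an illustrative assembly, not the actual prompt framework taught in the course; all the sample values and section labels here are made up:

```python
def build_structured_prompt(role, task, context, examples, output_format):
    """Assemble a prompt with the five sections described above:
    role, task, context, examples, and desired output format."""
    sections = [
        f"ROLE: {role}",
        f"TASK: {task}",
        f"CONTEXT: {context}",
        "EXAMPLES:\n" + "\n".join(f"- {e}" for e in examples),
        f"OUTPUT FORMAT: {output_format}",
    ]
    # Blank lines between sections keep the structure readable to the model.
    return "\n\n".join(sections)

prompt = build_structured_prompt(
    role="You are a B2B sales development rep.",
    task="Draft a three-sentence cold-email opener.",
    context="Prospect is a VP of Ops at a 200-person logistics firm.",
    examples=["Short, specific, no buzzwords."],
    output_format="Plain text, under 60 words.",
)
```

Compare that with the one-sentence, search-engine-style prompt the course contrasts it against: the generic version omits everything except the task, which is exactly why its output comes back generic.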

1:18:48

Speaker A

And so many of those, Mike, are takeaways for really anyone. We're spotlighting sales here, but those three steps are applicable to whatever department you're in or whatever your role is.

1:22:49

Speaker B

Absolutely. All right, Paul, to wrap up this week we've got some AI product and funding updates. So I'm going to run through these and if anything is comment worthy, feel free to chime in.

1:22:59

Speaker A

Sounds good.

1:23:09

Speaker B

All right, so first up, Harvey, the AI platform for legal work that is used by over a hundred thousand lawyers across 1,300 organizations, just raised $200 million at an $11 billion valuation. Their total funding now exceeds $1 billion. Next up, the OpenAI Foundation announced it will invest at least a billion dollars in 2026 across life sciences, jobs and economic impact, AI resilience, and community programs. This foundation received a 26% equity stake in OpenAI as part of the company's restructuring; it's worth about $130 billion on paper. Also related to OpenAI, they have shelved plans for their adult-mode chatbot indefinitely, following pushback from staff and investors about the effect of sexualized AI content on society. This joins Sora on the list of side quests being dropped as OpenAI refocuses on its core business. Anthropic has launched Computer Use and a feature called Dispatch for Claude Pro and Max subscribers on macOS. Computer Use lets Claude control your mouse, keyboard, and screen to complete tasks across applications. Dispatch enables continuous conversations across devices, so you can assign Claude tasks from your phone and pick up the results on your desktop. Google has set a 2029 deadline for migrating its systems to what they call post-quantum cryptography. They warn that quantum computers are going to pose a really significant threat to current encryption standards, and it might happen a little earlier than they expected; Android 17 is already integrating quantum-resistant protections. SpaceX is preparing to file its IPO prospectus with regulators, targeting a June public listing. Advisors predict the company could raise more than $75 billion, which would actually surpass all the money raised by US IPOs last year combined. They were last valued at $1.5 trillion.
Microsoft has told managers in its Azure cloud and North American sales divisions to suspend new hiring, citing the need to restrain costs and improve margins. This freeze covers tens of thousands of employees. Microsoft stock is down significantly this year; it's one of the worst performers in big tech. And finally, a cluster of news about Meta this week. First, Mark Zuckerberg is building a personal AI agent to help him be CEO; it helps him retrieve information he'd normally go through layers of people to get. Meta employees are now using personal agent tools, called My Claw and Second Brain internally, to talk to colleagues and their agents on their behalf. And apparently AI tool usage is now a factor in employee performance reviews. CTO Andrew Bosworth is taking over Meta's AI for Work initiative, overseeing the push to make the 78,000-person company as nimble as AI-native startups. Meanwhile, Meta has launched a new executive incentive program that, to fully pay out, would require a $9 trillion market cap by 2031. That's a 500% increase from the current $1.5 trillion. And finally, on the research side, Meta introduced something called TRIBE V2, a trimodal brain encoder foundation model trained on 500-plus hours of fMRI recordings from 700-plus people. This creates a digital twin of neural activity and enables predictions of how the human brain responds to sights and sounds. That last one sounds a little sinister, Paul.

1:23:09

Speaker A

I hate ending podcasts like this, but, anybody but Meta. I would have liked to have seen this research come from anybody but Meta. What is a social network going to do with that? Predicting how human brains respond to sights and sounds? I can't come up with a positive use of that.

1:26:48

Speaker B

Yeah. What are they going to do? Can you come up with a positive use? I can't.

1:27:05

Speaker A

I know what they're gonna do with it. I'm trying to figure out what good could come out of that, right? Yeah, when I saw that release.

1:27:10

Speaker B

God, stop.

1:27:18

Speaker A

Yeah. They don't have the best track record of doing things like that for the good of humanity.

1:27:20

Speaker B

Not exactly.

1:27:24

Speaker A

Maybe they'll turn it in a positive direction, though.

1:27:27

Speaker B

That would be nice. You know, we can see some positive news. One final reminder here: we mentioned at the top of the episode that our AI Pulse survey is in the field this week at smarterx.ai/pulse. In this week's survey, we're going to ask your perspective on some of this company messaging about AI and jobs, and your perspective on new data center construction in the US. So if you could please go take the Pulse at smarterx.ai/pulse, we'd love to hear from you. Paul, thanks for breaking down a busy week in AI for us.

1:27:30

Speaker A

Yeah. And a quick note: next Tuesday, which is April 3rd, I believe, our regular weekly episode is going to be replaced, because I won't be available to record it. So Mike and I are going to do something different. We're actually going to do a quarterly trends briefing, which we have to find time, Mike, in the next two days to record.

1:28:07

Speaker B

Yes.

1:28:30

Speaker A

So we are going to drop an episode next Tuesday, but it's going to be a Q1 trends briefing. We're going to look at everything that's happened over the last quarter. We usually do this as part of our AI Academy, and we're thinking about moving it to where our Academy members may actually be able to join live in the future, not for this one, but as a value-add for our members. We're thinking about making the trends briefing a regular podcast episode because it's so valuable and so helpful to frame all this for everybody. So something to look forward to next week. Again, no weekly; we're going to do our best to catch up on all of it when we get back. April 14th, I guess, would be the next weekly we'll do, but we will have an episode for you next week while I'm away, and it'll be a Q1 AI trends briefing for business. So keep an eye out for that. And yeah, have a great week and a half or so before we talk to you again. We appreciate it. Have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.

1:28:30