This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. Anthropic has a new scary model. Very scary things are happening to OpenAI CEO Sam Altman. Microsoft is reportedly panicking about Copilot's performance, and somehow Meta's newest AI model is crushing it? Yet that's not even half of what's moving and important in the AI world right now. That's because OpenAI and Anthropic both confirmed big desktop releases this week. Oh, and OpenAI wants to tax robots and let humans work only four days a week. Yay? All right. If you missed any of that or everything else that we're going to go over today, don't worry. That's what we're here for. This is the AI news that matters. If you're new here, welcome to Everyday AI. We do this, well, every day. It's your daily livestream, podcast and free daily newsletter helping everyday business leaders like you and me not just keep up with everything that's happening in the AI world, but make sense of it and get ahead to grow your company and career. So if that's what you're trying to do, awesome. It starts here with the unedited, unscripted livestream and podcast, but to be the smartest person in AI at your company, make sure to go to our website at youreverydayai.com. All right. There you can sign up for the free daily newsletter, and we'll tell you everything else that's happening today. But on Mondays, we give you the AI news that matters. So don't spend all week spending multiple hours a day reading things and wondering, oh my gosh, is this real? Is this fake? No, just join us on Mondays. It's a great way to kick off the week. I do this literally nonstop, 24/7, to help you. All right. So let's start off with our first piece of AI news, which is actually a big one. It hasn't happened yet, but the companies have confirmed that it will.
That's because we have, well, things are heating up, just like the weather here in April in Chicago. It's like, my gosh, it's finally more than 20 degrees. But as the weather is heating up, so is the AI competition. So we are getting new releases this week from Anthropic and from OpenAI. So here's what we have via reporting from TestingCatalog. So Anthropic is preparing Epitaxy. Epitaxy? I don't know. What are these code words? Can we get some easier ones? Epitaxy, a major power-user redesign of Claude. So according to details uncovered from Claude Code, the internal project is codenamed Epitaxy, and it signals a big shift toward a more professional, power-user-focused desktop experience that could ship this week. So the redesigned interface introduces a single-window layout with dedicated panels for plans, tasks handled by sub-agents, and diffs, plus live code previews and support for working across multiple repos. So Anthropic is also developing a new coordinator mode, which would allow Claude to manage and delegate work across multiple parallel sub-agents while concentrating on higher-level planning and synthesis. Users will also reportedly be able to create agents directly inside the app on the fly. All right, now with OpenAI, we've seen a lot of rumors swirling lately, but it seems like they're all but confirmed, as multiple members of OpenAI did say that they're going to be shipping some major updates this week in Codex. And, well, that's because they're quietly starting to test a new scratch pad feature for Codex that would let users run multiple tasks in parallel, a move that points to a big expansion of Codex beyond just coding and into a central hub for AI-driven work. So yeah, essentially with this scratch pad, you type out a bunch of things, right? It can be notes, and then they turn into chats, and then Codex just does them. So think of it as if you were to, you know, leave yourself to-dos, and then Codex just does them, right? So pretty cool.
So these references suggest that OpenAI is also consolidating ChatGPT, the Atlas browser and its different software and agentic engineering tools into the single super app, which we've been talking about over the last couple of weeks, but it does appear, anyway, that Codex may be kind of the final landing platform. I'm not sure if that's ultimately going to be true, or if they're going to name it something else, but it does look like a lot of these rumored features of bringing the different functionalities all into one are at least being debuted in Codex. So we have seen reports that Codex may be the ultimate winner here, at least when it comes to desktop software. So yeah, if you don't know, you can use ChatGPT on the web, but also as desktop software: you have the ChatGPT app, and then you have the Codex app. So in the new kind of leaks here from OpenAI, one of the most telling discoveries is a heartbeat system designed to maintain persistent connections with long-running tasks. And if that sounds like OpenClaw, yeah, that's because that approach does closely mirror systems already used by OpenClaw and by Anthropic's managed agent project Conway, which we talked about last week, making OpenAI's move a clear competitive response in the desktop play. Separately, social media posts from OpenAI employees featuring snowflake emojis, yeah, we're talking about snowflake emojis here on the show, have sparked speculation about a new model release codenamed Glacier, which some believe to be GPT-5.5, raising the possibility that OpenAI could pair a major platform launch with a model upgrade in the coming days. So yeah, maybe we'll see the rumored GPT-5.5, maybe we'll see the full super app, or maybe this week we'll just see a Codex release with some of the other features kind of baked in. My guess would be the latter, but on this one, my guess is as good as yours. I don't have any inside intel on this one at least. All right, next.
Anthropic has launched their new Claude managed agents to make AI agents easier to build, run and scale. So it is in public beta, and it's offering a full production stack that lets developers build and deploy cloud-hosted AI agents without managing infrastructure themselves, which makes this a notable step toward more practical, enterprise-ready agentic AI. So if that sounds super confusing, well, it might be. So you do have to use this on the back end in Anthropic's platform. So you're not using this in the Claude app, FYI, right? So you're not going to go to Claude.ai. You're going to be using Anthropic's developer platform. The good thing is, well, you don't have to have a paid Claude account to do this. You just have to at least have a credit card on file, because you will be charged for usage. So yes, this is more on the technical side. But the cool thing is, if you've used the GPT builder in ChatGPT, it's kind of like a, sort of like a version of that, right? So you can simply chat with Claude to help you build agents in these new Claude managed agents. But you can also go a little bit more technical and under the hood, right? And the cool thing is it can connect to basically any MCP server. It can connect to anything, right? So in the same way that you might use Claude Code, and you might not know how to do any of this coding, but it's using the terminal and connecting to all these API services and doing all these magical things, that's kind of what Claude managed agents looks like inside of Anthropic's platform. So I did get to play with it for a little bit, I think on Friday. So I haven't spent multiple hours, but it does seem like a pretty simple way to build agents, but then to have them contained in Anthropic's kind of sandbox. You don't have to worry about deploying it out on your own. So the platform handles sandboxed code execution, credential management, scoped permissions, checkpointing and end-to-end tracing.
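If the multi-agent coordination idea is hard to picture, here's a toy sketch of the pattern: one coordinator fanning work out to parallel sub-agents and then collecting their results. To be clear, this is not Anthropic's actual API, just the general shape of the pattern in plain Python, and the `sub_agent` function is a hypothetical stand-in for what would really be an LLM call with its own tools and context.

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    """Hypothetical stand-in for a sub-agent: in a real system this would
    be a model call with its own tools and context window."""
    return f"done: {task}"

def coordinator(tasks: list[str]) -> list[str]:
    """The coordinator fans tasks out to sub-agents in parallel, then
    synthesizes (here: simply collects, in order) their results."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(sub_agent, tasks))

results = coordinator(["check email", "draft newsletter", "update CRM"])
print(results)  # → ['done: check email', 'done: draft newsletter', 'done: update CRM']
```

The real product layers the sandboxing, credentials and tracing on top of this basic fan-out shape, and, as noted below, every parallel sub-agent run is billed usage, which is why costs can add up quickly.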
Meaning, teams can focus on defining the tasks, tools and guardrails, while Anthropic's orchestration system manages the tool use, context and error recovery. So Claude managed agents also support long-running autonomous sessions that persist through disconnections and include multi-agent coordination, allowing one agent to spin up and manage others to parallelize, sorry, parallelize, that's a hard word to say, right, complex work. So, sounds great in theory, right? And it is. However, I will warn you, running parallel agents is great, right? Especially if you're using Claude Code or if you're using Codex. Just keep in mind, if you are using Claude managed agents, yeah, all that spinning up of sub-agents is going to cost you, because you're paying via usage, you're paying via tokens, you're paying via the API. So keep that in mind. Sounds great. And it is, right? I've tested it, and I instantly had an agent that connected to my email newsletter and all these other services that had MCP data, which is great. So you can just say, hey, I have all these services, go connect to them. It'll bring up an authorization page, you click a couple of things, and all of a sudden you have an agent. Let's say there's five pieces of software that you use all the time. And you're like, okay, I could try to piecemeal this together. Or, well, this is where this new release from Anthropic really, really works, because it will just kind of build it for you. Yes, you do have to authorize the agent, but then you can just run it in the sandbox. But like I said, the cost will add up fairly quickly. All right. A new deal for superintelligence. That is our next story, because OpenAI published a 13-page policy document titled Industrial Policy for the Intelligence Age: Ideas to Keep People First. So this is what a lot of people are calling the superintelligence New Deal.
And it outlined how governments should tax, regulate and redistribute wealth from AI as the technology rapidly reshapes the economy. So the blueprint argues that AI progress is accelerating so quickly that the US may need a new social contract, comparable to the Progressive Era or the New Deal, to address risks like mass job displacement, cyberattacks and social instability. So OpenAI proposes bold new ideas, including a national public wealth fund funded partly by AI companies, taxes on automated labor to replace shrinking payroll taxes, and a four-day, 32-hour work week that shares AI productivity gains with workers. So the document also calls for treating AI access as a basic right for workers and schools, creating containment plans for dangerous autonomous systems, and triggering automatic expansions of unemployment and wage support when AI-driven disruption hits preset levels. So parts of this, I think, were really good, right? And if you didn't get a chance to read this, we shared it in our newsletter last week, but that's why you should be subscribing to our newsletter. So I think parts of this are great in theory. Many of these things will never see the light of day, because many of them require the government to act in some official capacity. And this is coming from someone that used to cover the government as a journalist. Government doesn't work like that, especially today's federal government. I don't think anything of this magnitude will see the light of legislation in the next, I don't know, three to five years, right? So what we should really be following is the states. And we will see if states, you know, adopt anything like this. Obviously, I would keep an eye on California, which is where most of the big tech companies are headquartered. So a couple of things I kind of wanted to point out, right? Like the robot tax, very popular. A lot of people have talked about that. That makes sense.
And you do have to, I guess, tip your hat to OpenAI for saying, like, okay, yeah, if AI takes all these jobs, we need to have money to help all the humans, and we should be taxing the robots. You know, that makes sense. But then on the other hand, they're essentially saying that AI needs to be deemed a basic right. So, you know, on one hand, they're like, okay, well, this thing that we're selling, we need to call it a basic human right. But at the same time, we know it's probably going to take a lot of jobs, and so we need to do something about that. So I've talked about this a lot over the course of the last three years. I'm not going to bore you with my hot takes. But, you know, overall, I do think AI is going to change what full-time employment means in the US. I think ultimately, AI will replace more full-time jobs than it will create. But I do think the future of work is, well, a lot of people that aren't even entrepreneurs are going to have multiple knowledge-working side hustles, right? So, I don't know, if you're a lawyer, maybe you get laid off from your law firm, and instead of being a full-time employee, you might just have 10, you know, very niche lawyer side gigs, right? That's kind of the way I see things shaking out. But yeah, we'll see. All right. Next piece of AI news. This one was kind of shocking. Yeah, Meta has a new model and it's actually pretty good. So Meta announced their new Muse Spark. That's their new AI model, after investing a ton of money and a ton of time. We're talking billions of dollars and more than a year. So like I talked about on our Friday features, I did get to sneak this one in on our new Friday features. But, you know, I said it's been a year since Meta released Llama 4, and in AI time, that feels like a decade, right? It seems like almost everyone wrote Meta's Llama, or sorry, Meta off, because they didn't really come up with anything after Llama.
But we knew that they had some big shifts internally. And it looks like their first model, anyway, is fairly impressive. So yeah, the company pulled off an acqui-hire, more than $14 billion for Scale AI and its CEO, Alexandr Wang. Then the company reportedly offered some engineers pay packages worth hundreds of millions of dollars to staff the new MSL, or the Meta Superintelligence team. So Muse Spark is the first model in a series that was known internally as Avocado. So that was the code name. So if you've been listening to the show, we've been talking about that. And right now it's initially available only on Meta's AI app and their website. And they do have plans to essentially replace the Llama models everywhere with the new Muse Spark. The other thing to keep in mind: well, unlike previous open releases via the Llama series, the new Muse Spark is not open source. So it is closed. It is proprietary. Right now it's only available for free, right? So presumably that will change. And right now it is not available via the API, although the team at Meta did say that they will be rolling out the API soon. So according to independent evaluations from Artificial Analysis, Muse Spark already matches top models from Google, OpenAI and Anthropic in language and visual tasks, but falls behind in coding and abstract reasoning, tying for fourth place overall. Yeah, I was actually fairly shocked, right? So we talk about Artificial Analysis a lot on the show. It's essentially kind of like an aggregator, right? So it takes all these different benchmarks and all these different scores from all these different places and gives all the models a score, right? So right now Google and OpenAI are tied with their respective models. And then in technically second place, you have Claude with Opus 4.6. And now in third, technically, you have Muse Spark, right? Which is pretty impressive.
The other thing you have to think about, and I think people are looking at this a little bit differently, is that Meta did say that they've rebuilt this model from the ground up, right? So this is not, according to Meta, just a new version of Llama that's been improved upon. This is, Meta says, a built-from-scratch new model. And the fact that it's doing that well already, it's just one point behind Claude Opus 4.6 on Artificial Analysis, and what is maybe even crazier is Arena, right? So we talk about Arena, formerly LMArena. So this is the blind taste test. And it's also third right now on LMArena, although that could change at any second because it's only by like one point. But regardless, it's a top-five model by benchmarks and by user preference. Which, if I was putting money on this before, I would have said they were probably going to be in more of the five-to-eight range. So fairly impressive. And a lot of people were kind of dragging Meta, right? Because they released their benchmarks and people are like, okay, well, Meta released all these benchmarks, and they're not even really top on any of them. But when you think about it, this is technically their first model in this series, and it's, you know, top two, three, four, depending on what you look at. I don't know. I'm impressed. I've used it. My actual usage is mixed, right? Because I'm a very heavy GPT-5.4 Pro user and I was giving it very complex tasks. You know, there is also a new, what is it called? It's called contemplating mode, right? Which kind of runs these multiple agents simultaneously. So that was the thing I was really looking forward to, because I'm like, oh my gosh, this thing runs, you know, 16 agents at a time or something like that. And it's supposed to be comparable to, you know, Gemini Deep Think or OpenAI's GPT-5.4 Pro. To me, I wasn't as impressed with the new contemplating mode, but I was maybe more impressed actually with its coding abilities and its writing abilities.
So, yeah, you have a new model to try out at least. All right. Going from an impressive model to a company that is maybe not impressed with its current AI outputs: that is because, according to reports, Microsoft is under a Copilot code red. All right. So this is according to BNP Paribas analyst Stefan Slowinski, who reported that Microsoft CEO Satya Nadella has declared a Copilot code red inside of Microsoft, signaling an all-out push to enhance Copilot's performance and user experience. So the urgency comes as investors express frustration over Copilot's limited traction, despite Microsoft's leadership in software in general. So Nadella's initiative reportedly includes the upcoming launch of the E7 suite, which I believe should be here around the beginning of May, with ongoing updates and new features planned throughout the year to accelerate Copilot's adoption and usefulness. So according to Slowinski, the initial feedback on Copilot is improving, suggesting Microsoft's renewed focus could pay off as it leads to better user satisfaction and market perception.

Quick break: AI moves too fast to follow, but you're expected to keep up. Otherwise, your career or company might lag behind while AI-native competitors leap ahead. But you don't have 10 hours a day to understand it all. That's what I do for you. And after 700-plus episodes of Everyday AI, the most common question I get is, where do I start? That's why we created the Start Here series, an ongoing podcast series of more than a dozen episodes you can listen to in order. It covers the basics for beginners and sharpens the skills of AI champions pushing their companies forward. In the ongoing series, we explain complex trends in simple language that you can turn into action. There's three ways to jump in. Number one, go scroll back to the first one in episode 691. Number two, tap the link in your show notes at any time for the Start Here series. Or you can just go to starthereseries.com, which also gives you free access to our Inner Circle community, where you can connect with other business leaders doing the same. The Start Here series will slow down the pace of AI so you can get ahead.

Back to Microsoft. The competitive threat from rivals such as Anthropic is a major reason behind the code red strategy, as Microsoft aims to stay ahead in enterprise AI tools. So Slowinski also noted that Azure could still outperform expectations due to growing demand for tokens and higher GPU pricing, even as internal usage increases further. Here's the thing with Microsoft, right? It's no secret that the enterprise has been rather frustrated with Copilot, right? They were one of the first out of the gate, right? You technically had ChatGPT first, but I mean, Copilot was the first, like, serious enterprise business AI tool. And I think a lot of enterprises who adopted early and invested heavily, right, in 2023 and 2024, maybe they've been disappointed in the last two years or so, as you've seen Google, Anthropic and OpenAI really just take off. However, if I'm Microsoft, I'm not exactly worried, right? They're the only company that has the green flag, for the most part, across the entire enterprise, right? It's much easier for Microsoft Copilot to break its way through the enterprise, although obviously Google, OpenAI and others have been really cracking that space. But in the end, I'm not super concerned top level. If I'm Microsoft, yes, you've got to make Copilot better. Yes, a lot of people don't enjoy using it. Yes, a lot of Copilot users are jumping ship, specifically to OpenAI and to Google. But I don't know. Microsoft's a big investor in Anthropic. Microsoft is the biggest single investor in OpenAI. So yes, it's bad if they're losing users to, sorry, my gosh, I can't speak today, if they are losing users to OpenAI or Anthropic. But in the end, they're still just making money off that anyways.
So we'll see if this Copilot code red leads to anything. We did see similar stories earlier this year that, you know, Nadella was going full PM mode, right? Like product manager, rolling up his sleeves, sitting down with the product team. So I'm actually, and I told some people this at an in-person event last week in Chicago, I'm actually bullish on Microsoft. I've seen a lot of what they've released the last couple of weeks, right? Essentially what they're doing, I'm not going to say they're white-labeling a lot of products, right? But they came out with a version of Copilot, their Copilot Cowork, which is very similar to Anthropic's Cowork. It's really good, right? They have their new tasks feature, which is really good, similar to some features from Anthropic and OpenAI, just scheduled tasks. So I think Microsoft has actually been shipping a lot. I think for those companies that maybe haven't found that utility in Microsoft Copilot, it's actually more of a training and education problem versus a model problem, because now you get the best of both OpenAI and Anthropic when you're using Microsoft. All right, let's get to some scary stuff happening to OpenAI CEO Sam Altman. Yeah, this was shocking to read about over the weekend. So OpenAI CEO Sam Altman's San Francisco home was targeted twice over the past four days, raising concerns about the risks facing tech leaders in the AI sector. So the latest incident happened early Sunday morning, when suspects in a car allegedly fired a round of shots at Altman's property before fleeing the scene. So police quickly traced the vehicle using surveillance footage and arrested two suspects later that morning. So officers searching the suspects' residence found three firearms, and both individuals were booked for negligent discharge of a firearm.
So this attack followed a Friday morning incident in which a 20-year-old man from Texas allegedly threw a Molotov cocktail at Altman's home. So security at Altman's property extinguished the fire from the Molotov cocktail, and no injuries were reported in either incident. But the two attacks come as Altman has publicly voiced concerns about the societal impact and anxiety surrounding AI, calling it the largest change to society in a long time. So the rapid succession of attacks underscores the growing tensions and security risks for leaders at the forefront of AI development. So Altman did respond in a blog post after the incident on Friday. And he was also critical of a New Yorker article that questioned his trustworthiness, acknowledging the impact of those negative narratives. So Altman did admit past mistakes, including being, you know, conflict-averse and mishandling issues with the OpenAI board, but emphasized his commitment to improving OpenAI's mission. He called for less dramatic rhetoric in the industry, advocating for broad technology sharing and urging constructive debate to avoid further real-world harm. Here's the harsh reality, right? I'm going to say this as someone that lives in Chicago. And that's important, because I think maybe the majority of our listeners are not from Silicon Valley, right? But I know, you know, there's other popular tech publications where the majority of people are from Silicon Valley. Silicon Valley is a bubble in a bubble, right? I don't quite think that Silicon Valley and all the big AI frontier labs really understand what the rest of the world, or the rest of the US, feels about AI, because I don't think you truly understand unless you live it. And the reality is most people don't want it. Most people don't like it. Most people view AI as a threat. So unfortunately, this is an extremely unfortunate incident that happened.
But I think that we're going to continue to see this with AI leaders from all the big companies. I think this is going to be, unfortunately, an ongoing issue for their literal safety, right? Because as people start losing their jobs to AI, right, you can't just get mad at the cloud, right? Unfortunately, it's people like Sam Altman, people like Dario Amodei from Anthropic, people like Sundar Pichai, you know, people like Satya Nadella. It's the faces of these big, you know, four or five companies, you know, Mark Zuckerberg and Meta as well. These are the people that people are going to be angry at, right? Because unlike, you know, the Internet, there was really no face of the Internet. I guess you could say maybe Bill Gates. But ultimately, the Internet was a very slow change to jobs. It was a slower change to the economy. Yes, you had the dot-com boom and bust, but things with AI are moving much, much faster. And I don't think that people in Silicon Valley necessarily largely understand how the rest of the US really feels about AI. And yeah, I think that, unfortunately, we're going to see ugly incidents, and I don't want it to happen, right? And I hope all the leaders of these AI companies stay safe, because ultimately, I am very optimistic about AI's future and doing more good than bad. You know, hopefully it's able to cure diseases and do all of these great things. But yes, it's going to cause a lot of unemployment at the same time. And people are going to be mad. So this is terrible. I hope it doesn't happen again. But unfortunately, I do think that the leaders of AI tech companies are going to have to be doubling up their security as the rest of the US finally sees what AI is capable of in terms of job displacement. All right. Last but not least, more scary stuff. A model so scary, Anthropic can't release it.
So Anthropic has announced its new Mythos preview model, which they say is so powerful at finding software vulnerabilities that the company is keeping it private, raising concerns about both cybersecurity and access to advanced technology. So Anthropic said its new Mythos preview model has found thousands of critical vulnerabilities across major operating systems and web browsers, including a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg, all of which had previously gone undetected. So essentially, they're saying that their new Mythos model is a cybersecurity whiz, and it's able to find thousands of these, you know, zero-day bugs that, you know, millions of human researchers could never find. But the company is not, at least for now, releasing the model publicly. Instead, they have their new Project Glasswing, which is essentially a group of companies that they're giving access to Mythos. And they're essentially saying, use this to harden up your software, because when a model like this kind of hits the streets, right, we want these, you know, big tech companies to be safe. We want the technology that people use to not be exploited by a model like Mythos or similar, right? So the company right now is sharing it only with partners such as Apple, AWS, Google, IBM, Microsoft and 40 other organizations as part of Project Glasswing, and that is kind of their defensive cybersecurity initiative. But this move marks the first time in the modern AI era that a major model is being withheld from the general public but released privately, due to concerns over its potential misuse, creating a significant knowledge and technology gap between elite companies and the broader public. So yes, there were times early on, right? Like even, I remember OpenAI way back in the day, because I was using their, you know, their early GPT, I forget if I was using GPT-2 or GPT-3 technology, right?
Like back in 2020, I remember there was a time they were like, oh, we're not going to release this model because, you know, it could write lies about people. And, you know, they eventually released it. It wasn't that they just released it to 40 companies. So there have been times in the past when companies have said something like, oh my gosh, our model is too good, we're not going to release it. But they eventually did release it, right? Same with Anthropic here. Presumably they'll eventually release a version of Mythos. Maybe it's a stripped-down version, but it seems like, at least for the short or medium term, for the first time, there's a huge tech divide, right? The, you know, democratization of AI may no longer be a thing anymore, right? So it's like, oh, we had a great run for the last, you know, four or five years, when, you know, the Fortune 100 companies were using the same thing as, you know, small mom-and-pop shops. So that time may now be gone with this new Mythos model. So the company claims that Mythos was not intentionally trained to be a cyber threat, but its advanced coding abilities led to the discovery of vulnerabilities that even the top human experts and previous AI tools missed. So what's my take on this? I mean, I did a whole episode on it, episode 752, so you can go listen to that. I'm not going to spend too long on it. I mean, part of me thinks Anthropic made the right move here, right? If they are truly, actually concerned about this being a model that could be a cyber threat, okay, that's great. To me, I don't know. You know, Anthropic and its CEO have had a lot of, I won't say boy-who-cried-wolf moments, but they've had a lot of instances in the past where they're really hyping things up, right? They're like, oh, you know, AI is going to take all coding jobs. And then they said AI is going to take, you know, half of white-collar jobs. And those things might ultimately come to fruition. I don't know.
But to me, it seems like this was a strategic play with Glasswing, right? You get everyone talking, you know, about how there's this new dangerous model, right? And, you know, I don't know. To me, I think Anthropic had a huge and embarrassing data leak a couple of weeks ago, right? And they know that they're going to be going for an IPO here, presumably in quarter three or quarter four, and they need something, right? They need something in between. You can't have your last big, you know, international splash on the news radar be, oh, that time you accidentally leaked the source code to your most popular product on the internet, right? That's not a good time. And then, like, four months later, you know, at least outside of the AI scene, right? Like, we're talking about AI, you know, we're talking about Anthropic every day. But I'm saying the entire world, right? The entire world was talking about Anthropic and that code leak. Anthropic needed something, I think, to divert the attention from, oh my gosh, we accidentally just leaked the source code to our most popular product, Claude Code, right? And we're getting ready for an IPO here. We need to start to spin up a new narrative. So, you know, now it seems like this is the new narrative. Whether it's 100 percent true, 50 percent true, I don't know, right? I don't know. I would say it's 50 percent true. They are actually concerned about, you know, releasing this publicly, because, yeah, it could create a lot of bad actors, we'll just say that, right, with all the software that we use. Yet at the same time, I do think this is a little bit of pre-IPO marketing and, you know, just trying to flex on everyone and saying, yeah, look at how good our models are. All right, so that is it for the big stories of the day. Or sorry, of the week. We're going to end with our what's new and what's next.
So this is a combination of some leaks, some rumors, and some pieces of news and updates that came out this week that we didn't have enough time to give full attention to. So we're going to go quick here. Starting off: Google adds notebooks inside of Gemini with full bi-directional sync to NotebookLM. Yeah, so it's kind of like projects, but it also works with NotebookLM. Pretty cool. Leaks show that Anthropic may be building a Lovable-esque full-stack software building program. That would be crazy. Brad Gerstner of Altimeter Capital said that companies are already using OpenAI's Spud model and that it rivals Claude's Mythos. OpenAI launched a $100-a-month ChatGPT Pro tier with 10x Codex access. So yeah, if you didn't want to pay the $200 a month, but you wanted more than the $20-a-month Plus plan, now you have the mid-tier $100-a-month option. Apple is testing four premium-material smart glasses designs, powered by AI and paired to your iPhone, targeting a launch in 2027. Spotify now creates podcast playlists from natural language prompts. I don't know, maybe ask for the best Everyday AI episodes. Try that. All right. Alibaba's Happy Horse 1 unexpectedly took the top spot in the AI video arena. Yeah, it looks really good. Better than Seedance, better than Veo 3, sorry, Veo 3.1, at least for now. Speaking of models that made a splash, Z.ai released their GLM 5.1, which is not only the new state-of-the-art open-source model, but it also out-benched top frontier models on SWE-bench, like GPT-5.4, Opus 4.6, and Gemini 3.1 Pro. That's huge, right? An open-source model. I mean, you've got to have, like, a supercomputer to actually download and run this thing, but it's open source and it outperformed the big three on SWE-bench, which is one of the most popular benchmarks for software engineering.
Microsoft updated its Copilot terms, because previously they said Copilot was for entertainment purposes only. Yeah, there was some criticism of that, so they changed it. Next, a DC court allowed the Pentagon to blacklist Anthropic, but other agencies can still contract with Anthropic. So that ongoing battle might be closer to being closed; we'll see if that gets appealed. Next, according to an Axios report, OpenAI is projecting $100 billion in ad revenue by 2030. Nebius is in talks to acquire AI21 Labs for up to three billion dollars, according to reports. OpenAI has partnered with Upwork so users can hire freelancers directly in ChatGPT. Goldman Sachs came out with a new report that said AI-displaced workers face lower earnings and higher unemployment risk for a decade. LMArena has released the full history of its AI leaderboards as a public dataset. OpenAI is testing a new image generation model in ChatGPT via an A/B test. We talked about them testing it on LMArena, but now they're also testing it inside of ChatGPT, so that would presumably be the V2 of their image model. Google Workspace launched a feature where Gemini suggests the best meeting times for everyone. I mean, we've been needing that for like 20 years. All right, next, Pico launched an AI self-video chat beta where your agent talks, remembers, and acts in real time. Google quietly dropped this one: it's called AI Edge Eloquent, a free offline dictation app on iOS using Gemma models. It's super impressive too, FYI. Speaking of Google, they're preparing a Jules V2 coding agent that can set goals and drive improvements without prompting. The Gemini app now lets you create interactive 3D simulations and models inside of the actual chat, which is really cool, just to visually explain things. Google also expanded its finance tools globally with new AI capabilities.
Claude Cowork hit general availability for all paid plans, and Meta signed a $21 billion deal with CoreWeave to expand AI cloud capacity. Whew, that was a lot. All right, I hope this was helpful. I got a little tongue-tied there; it just happens, right? There's so much going on, even I struggle to talk about it all. So don't spend hours every single day trying to keep up. Join us on Mondays as we bring you the AI news that matters. If you're new here: on Wednesdays, we go hands-on, usually doing a deep dive on one tool, so make sure to check out today's newsletter, where we'll probably run a poll on what you want to see. On Fridays, we do our AI Feature Fridays, where we usually cover a handful of new features that you can start using now. And on Tuesdays and Thursdays, we rotate our shows. So I hope this was helpful. If you're listening on the podcast, do me a favor and leave a review after you subscribe; I'd really appreciate that. Thank you for tuning in. If you haven't already, please go to youreverydayai.com and sign up for the free daily newsletter. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all. And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more magic, visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.