This is the Daily Tech News for Tuesday, February 24th, 2026. We tell you what you need to know, give you the important context, and help each other understand. And today: which is more dangerous, AI taking all our jobs, or a Substack post about AI taking all our jobs? Ding ding ding. I'm Jason Howell. I'm Tom Merritt. Let's start with what you need to know, with the big story. Yeah, you kind of framed that perfectly. Citrini... Citrini? I'm not sure how to pronounce it. Citrini Research issued a report Monday that described a worst-case scenario of successful LLMs pushing white-collar workers out of jobs. The Wall Street Journal wrote, quote, concerns of hyperscalers overspending are out. Worries of software industry disruption don't go far enough. The global intelligence crisis is about to hit. Dun, dun, dun. The Dow, S&P 500, and Nasdaq all experienced drops between one and two percent. Evercore, is it ISI? Okay, I don't want to read all four letters together. Evercore ISI's Krishna Guha critiqued the scenario with the following points. Krishna said it assumes that cost savings will not increase economic activity, when all economic history indicates that people saving the money get wealthier and the economic impact balances. It ignores Schumpeter's insight, which is that resources released from failing businesses usually create new businesses. The disruption is the gap between those two, not permanent. It assumes wealthy people will actually stop consuming. Quote, even if there are limits to the consumption of current goods and services, new ones will be invented. Yes, the purest case is products or activities that extend a person's healthy life. There is no limit to the amount of healthy life a wealthy person wishes to consume. End quote. And since this scenario does not assume robotics replaces manual work, blue-collar work would also see wage increases and also generate consumption. And finally, it assumes no government intervention, no monetary or fiscal policy response, such as raising taxes.
And then Financial Times' Rob Armstrong says, and I like this, we should probably be more worried that a Substack article can trigger a market rout. And so this is all based on that Citrini report. What are your thoughts? Yeah, I saw this report yesterday before we did DTNS, and I almost threw it in the quick hits as sort of an interesting like, oh, you know, somebody did a what-if-everything-went-too-well, right? Yeah. That was kind of the premise. And some people are slamming Citrini for this. I think Citrini was really saying like, okay, if everybody's most optimistic prediction came true, what do we think would happen? And to me, it wasn't meant to be a prediction or warning. It was sort of a thought experiment. Totally. And they said, we're not trying to be doomers, but when you follow certain logic, it goes this way. And then the stock market reacted with a panic, which I think made Krishna Guha say, well, hold on. The scenario isn't actually a simulation. This is a worst-case scenario. You have to remember that the government would step in, that economics actually don't work the way they show in this scenario, because things do tend to balance out, and all of that stuff that you mentioned. So I think it became the big story today because it had such a huge impact. And probably, like you said, the bigger story here is that Substack has enough cachet and enough weight these days that a company writing on Substack can move markets. I think that's the bigger story here. I don't think we learned anything new about what the impacts of AI might or might not be, but we definitely learned that the panic is continuing among investors who don't know what's going to happen, because we don't. Nobody does. And people writing on Substack are definitely being taken seriously. Yeah. I mean, even at the very top of the report, it calls this a scenario, comma, not a prediction. Really, it's a fictional story. Right.
It's a fictional story based somewhat in current facts, kind of going through the exercise of saying, if this, then what could possibly happen? But the reality is, we don't actually know that this is even what would happen. It's just a story. I think it is very interesting, like you say, A, that a Substack post could move markets, and B, that we kind of keep coming back here. I feel like the last month, there's been a lot of back and forth. We're going to talk about Anthropic and Claude Code and some of their news later. That definitely ties into this. And that is based on something. That is based on a market that either doesn't understand the technology or is very uncertain about what the future means in the face of this technology. And I don't know how that goes away or that changes. Yeah. It used to be that technology would spur the market to rise. Like, oh, we've got a new technology. That's going to make everybody richer, right? And maybe you'd pull out of a market sector that was going to be impacted by that, right? When cars came along, buggy whip investments went down, right? But I think it's more a testament to the fact that we live in a time of greater uncertainty about everything, that people are looking more pessimistically at stuff and grasping at straws to find any kind of direction forward, to understand what's going to happen next. You know, I don't think we have more wars, but we have more prominent wars than we used to. You know, we've got more turmoil. We've got more people sort of demanding that government fix stuff now than ever before. Honestly, when COVID happened, I thought, you know, this is the kind of thing that really disrupts the world, because it takes away all of your safety feelings. Your assumptions. Yeah. And I think we are still working through the impact of that, where people don't realize that they no longer trust certain things, because the rug got pulled out from all of us.
And this may be one of the knock-on effects of that. And I imagine we're going to see more posts like this, because everybody has a different view of what this current moment means for one, two, five, ten years down the line. All I know is everybody's reacting to, oh my goodness, the agents are going to come and they're going to replace us. But I don't know about you, Tom. I've played around with agents in my work and stuff. I'd say it's about 50-50 right now. Those agents aren't nearly as reliable as I would like them to be. And so if that is to be the case, and I'm not saying that it can't be the case, but if that is to be the case, there's a whole lot of cleanup on aisle three, as far as I can see. Well, and I think agents could become much better than they are now. And they probably will. But they're not at the moment, right? So is now the time to panic? And near term may or may not be the time to panic. I do think there's a little bit of forgetting that bell curves are a bell shape. Yeah, right. And LLMs shot up really fast, unexpectedly, because we were at the bottom of the bell curve. But I think we might be getting towards the top, or at least we're on our way to that top. And I'm more curious what the next development is going to be. I think there is a lot that LLMs can still improve on and still do, but we're starting to milk all of the various accelerations that we can get out of them. And I think there's going to be some other kind of technology that is going to have to come along if you're going to spur the kind of advancements that make us fulfill the direst predictions of what might happen. I think Citrini may have been trying to be clickbaity with this. I wouldn't rule that out. But they wouldn't have to have been. They may have just been saying, like, oh, this is a fun thought experiment. What would happen if? And we want the reactions, like Guha had, to point out the weaknesses in it. But you've got to put that out there to get the conversation started.
Yeah, that's a fair point. Well, DTNS gets these conversations started all the time. By the way, just to let you know, we have gotten so many good emails over the past week regarding developers and uses of tools and stuff. So if you didn't hear back from us, or you don't hear it on the show, do know that we read it and appreciated it. DTNS is made possible by you listening right now. Thank you, Johnny Hernandez, High Tech Oki, and Chris Zaragoza. Thank you, yeah. All right, there's more we need to know today, so let's get to the briefs. AMD has reached a deal with Meta to sell it enough MI450 series chips to power a capacity of six gigawatts. This is over several years, so they'll come in tranches of a gigawatt each. There's also an incentive in this deal: if Meta fulfills the orders, it can buy up to 160 million shares of AMD at one cent each. That would give Meta about 10% of the company. So, yeah, if you're having a hard time wrapping your head around this, Meta agreed to spend a lot of money to buy a lot of chips, up to six gigawatts' worth. And in thanks, AMD said, well, if you do spend that money, we'll give you up to 160 million shares of AMD at one cent each. Now, the deal will only reward Meta with that stock if the stock price reaches $600 and Meta fulfills a certain number of chip orders. It's currently just below $200 a share. So the idea is, Meta, you help us succeed, and then we'll give you a stake in the company. AMD struck a similar deal with OpenAI in October, also promising OpenAI up to a 10% stake in AMD.
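To put rough numbers on the incentive structure in that deal, here is a back-of-the-envelope sketch. The share count, strike, trigger, and current price come from the figures as reported on the show; this is arithmetic for illustration, not a valuation.

```python
# Back-of-the-envelope math on the reported AMD-Meta warrant terms.
# All figures are as described in the story; this is illustration only.

shares = 160_000_000       # maximum warrant shares Meta can earn
strike = 0.01              # exercise price: one cent per share
trigger_price = 600.00     # AMD stock price required for the award to vest
current_price = 200.00     # roughly where AMD trades today, per the story

cost_to_exercise = shares * strike         # what Meta would actually pay
value_at_trigger = shares * trigger_price  # market value once vested

print(f"Cost to exercise:    ${cost_to_exercise:,.0f}")   # $1,600,000
print(f"Value at trigger:    ${value_at_trigger:,.0f}")   # $96,000,000,000
print(f"Required stock move: {trigger_price / current_price:.0f}x")  # 3x
```

In other words, if AMD roughly triples to $600 and Meta hits its order milestones, Meta would pay about $1.6 million for a stake worth roughly $96 billion at that price, which is the "you help us succeed" part of the deal.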
And Meta also becomes a launch customer for AMD's sixth-generation EPYC CPUs. That's E-P-Y-C. The first gigawatt worth of chips will ship to Meta in the second half of this year. And Meta is also buying tens of billions of dollars' worth of chips from NVIDIA, so this is diversifying, not switching, its chip supplier. Yeah, and it's notable that this announcement is happening a day before NVIDIA's earnings report. I think we're expected to hear NVIDIA's financials tomorrow. And, you know, when things like that happen, it's kind of like, yeah, that wasn't just a fluke; it didn't just happen that way. AMD certainly doesn't mind. No, not at all. They're going to ride on that connection. Yeah. Although I don't think it's bad news for NVIDIA, do you? No, no, but I think it does show at least a company like Meta diversifying itself. I feel like the last couple of years, there's been this all-roads-lead-to-NVIDIA-with-its-hardware kind of narrative that's been happening. And now we're seeing stories like this that, I would imagine, tend to kind of break up that story a little bit, to say, hey, even companies like Meta have a diverse approach. It's not all about NVIDIA. It's always smart to diversify your suppliers. And also, we keep hearing stories that nobody can make enough chips, so it shouldn't be a shocker that people have to get multiple suppliers to give them the chips. Sure, yeah, you've got to de-risk yourself somehow. Well, or you just can't get them. NVIDIA's like, yeah, we'd love to sell you some, but we're tapped out; that's all we can make. And Meta's like, hey, AMD, you got any chips we can buy? So, yeah. Well, there you go. Apple announced that Foxconn will begin manufacturing Mac minis at a factory in Houston, Texas, starting later this year. Foxconn currently makes servers for Apple at this site as well. Manufacturing of the Mac mini will continue in Asia, with the Houston site satisfying local demand.
And then Apple will also source chips from TSMC's facility being built north of Phoenix, Arizona. Amkor is building a chip packaging site nearby to make the chips ready to be put in devices. Mac minis, they're all the rage right now. You know? I know that probably has nothing to do with this, maybe very little to do with this. Yes, exactly. You know, that will make this story hit a little different for some people. You're right. The Mac mini is still, like, what, 5% of Apple's computer sales? Most people buy MacBooks. But it's a start, right? It's a way to try something out and see what happens. It's a way for Foxconn to expand its ability and its diversity of supply chain. I was really interested to find out that Amkor is building a chip packaging site, because when you get all these stories about TSMC and others building chips in the US, I'm like, yeah, but then they're just going to have to send them to Southeast Asia to get packaged, right? Well, no, Amkor is building the chip packaging. So now what's going to happen is TSMC is going to make a bunch of chips, Amkor is going to package them, a few of them will go to Foxconn in Houston, and the rest will all go to Southeast Asia, or even India, because they're still building iPhones and stuff there. And TSMC is making iPhone chips at this facility as well. So this is never as simple as, okay, so now everything is here. It still makes sense to do things globally, but it also makes sense to have multiple locations, so that if one region suffers a weather calamity, for example, you're not tossing out all of the production of the world at once. Yeah, yeah, indeed. Anthropic is launching an enterprise push with Claude Cowork, a system of pre-made, department-specific agents with plug-ins aimed at finance, legal, HR, engineering, and design teams that can be customized to match your enterprise needs.
This is mostly about making these tools easier to deploy inside the enterprise, with a system that resembles what IT departments use to deploy software throughout an organizational infrastructure. The launch also includes some new integrations for pulling in additional context from Gmail, DocuSign, and others. That kind of stuff has been missing until now. Anthropic also posted yesterday that Claude Code could modernize old COBOL code, showing it as a cheaper and faster path for mainframes, for critical applications in government, airlines, financial institutions. There's a shortage of people who know COBOL, so this would be a very good thing. However, IBM stock tumbled 13 percent along with that post, which The Register was making fun of, because they're like, you don't have the people to do this anyway; this is not necessarily going to be bad for IBM. Yeah, but it's the reactionary thing that we were just talking about at the top of the show. It's like, oh, well, this must be bad for somebody. And especially, it's Anthropic in the story here, right? You know, this is something that's come up, I think, three or four times now in the last month. You'd think maybe Wall Street would be kind of tired of reacting every time Anthropic has a new software announcement. I'm sure they're exhausted at reacting every time. I know, right? Will they ever learn their lesson? But it is interesting from a SaaS perspective, and it continues to be an interesting story to me anyway, that there are a lot of companies that have built their businesses around these capabilities. Now you've got Anthropic opening up that plugin infrastructure inside of Claude Cowork even further to do these things that, at least I know for myself as an individual, random, everyday user of these services...
There are things like, you know, financial analysis, or certain tasks that I have in my life that I don't necessarily feel particularly adept at, and I don't have subscriptions to a service for that, but I do have a subscription to Claude. And if these models get to a point where they're useful for this financial modeling or competitive research or whatever, that's interesting. That's compelling as a user. And then what does that do on the software side? And that's why the markets continue to react. Yeah. Yeah. And it's not lost on me that we're seeing more and more of these kinds of stories about enterprise-level contracts, enterprise-level services. Like you said, SaaS, software as a service, coming from these companies. I think there is a chance that we see OpenAI, Anthropic, Microsoft, Google replace some traditional SaaS software makers. So the reaction isn't entirely ridiculous. But it's not going to happen now. It's going to happen slowly over the next several years. And they might even adapt too, right? Because everybody's getting on the edge. And it also comes with all these stories of, like, these companies can't possibly make money. It's like, well, you seem to think they will, because you think they're going to replace all these other companies with software as a service. So you can't have it both ways. It's going to be some percentage of one or the other, right? Yeah, that's a really good point. Well, we can't stop talking about Anthropic, apparently, because Anthropic is alleging that Chinese AI firms DeepSeek, Moonshot AI, and MiniMax ran large, coordinated distillation attacks on Claude. And if you're wondering what a distillation attack is, it's a way of copying a model's style, its capabilities, all the ways that it does things, basically by hammering at the API with a massive amount of prompts and then training on the responses that you get.
So over time, the model that you're feeding those responses into can kind of learn how that other model thinks, and put "thinks" in air quotes. Anthropic says this was done through around 16 million exchanges across 24,000 fraudulent accounts. The traffic was routed through commercial proxy services to avoid blocking and detection. Anthropic says this is an increasing national security risk as well, and that joint action by AI labs, cloud providers, and policymakers is needed to manage that threat. And obviously, you know, the political discourse of today makes this a very convenient thing to happen right now and to lean into, I suppose. I have an issue with Anthropic calling this an attack. To me, it's fraud, right? They are trying to get around terms of service. They are trying to hide who's really doing this. But distillation is a perfectly acceptable way of training and improving a model. So I don't think this is an attack. This is just a way to try to use distillation without the company approving. It's a security risk in the sense that these companies could get better faster, and they are a foreign adversary to the United States. But it's not a security risk in the sense of, like, they broke into Claude and stole stuff. They just used Claude in a way that Anthropic doesn't like in order to improve their models. The method itself is not controversial. And in fact, sometimes companies will overtly cooperate with each other to do distillation improvements of each other's models. So this is getting framed as an attack, and then that lets people go, like, oh, well, you stole all the Internet's data, so I guess it's fair that they steal yours. And I'm like, that's not what's going on here at all. This is not stealing data. This is using a tool without permission to use it in the way you're using it. Yeah.
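For the curious, the mechanics being described, querying a target model at scale and training your own model on its answers, can be sketched in miniature. Everything below is a toy stand-in: the teacher, the student, and the prompts are invented for illustration, and a real distillation run would fine-tune a neural network on millions of logged API exchanges rather than memorize a handful of pairs.

```python
# Toy sketch of distillation: log a "teacher" model's responses to many
# prompts, then train a "student" to reproduce that behavior.

def teacher(prompt: str) -> str:
    """Stand-in for the target model's API being queried at scale."""
    return f"Answer({prompt.strip().lower()})"

def collect_exchanges(prompts):
    """Step 1: hammer the teacher with prompts and log every exchange."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """Step 2: a 'student' that learns from the logged pairs.

    A real student would be a neural network fine-tuned on the pairs;
    this one just memorizes them to show the data flow.
    """
    def __init__(self):
        self._memory = {}

    def train(self, exchanges):
        for prompt, response in exchanges:
            self._memory[prompt] = response

    def generate(self, prompt: str) -> str:
        return self._memory.get(prompt, "(no learned response)")

exchanges = collect_exchanges(["What is RCS?", "Explain EUV"])
student = Student()
student.train(exchanges)
print(student.generate("What is RCS?"))  # student now mimics the teacher
```

The point of the sketch is that nothing here requires breaking into the teacher; every step is ordinary API use, which is why the dispute is about terms of service and fraudulent accounts rather than intrusion.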
And I mean, I was kind of looking through some of their terms of service: large-scale automated querying designed to replicate a model's capabilities, they note that as abuse, not normal usage. Using fraudulent accounts or proxy networks, that's not allowed. So all those things are checking off the boxes. So are you saying that maybe their terms of service are just, I don't know, asking for too much, or too prohibitive? No, I don't think so. I'm not trying to excuse the behavior. I think that's an important point. I'm saying it's a terms of service violation, which is also not good, but it's not like a security attack. An attack really puts it in big, bold, scary letters. And when Anthropic, when all these companies went and trained on the open data of the Internet, they didn't violate terms of service. They found stuff that was available on the Internet for them to access. They argue that their training is a fair use of that. Others say, well, it shouldn't have been. But it was unclear. What's happening here is clear. You are misrepresenting yourself to get access to our systems. You're pretending to be something that you're not. So I think you can object to both of them, but I think you have to object for different reasons. Yeah, fair. Well, folks, if you would like an honest review from someone who's actually bought and lived with a piece of technology, then you need to subscribe to Live With It. Sarah Lane hosts a weekly look at tech. I've been on there. Jason's been on there. A bunch of people show up with tech that we're actually using, tech that we buy, not stuff that we get for a week and review. That's useful, those week-long reviews. But this is a different kind of show. This is saying, hey, here's what I use it for. This is what it's good for over the long term, when I'm road testing it. If you would like that kind of perspective, listen to Live With It wherever fine podcasts are found.
Or you can watch it at youtube.com/dailytechnewsshow. All right, now it's time for some quick headlines. Just kind of the stuff that's good to know, to make you look smart at the next dinner table you find yourself sitting at. Yeah. For instance, you could say, you know, Panasonic doesn't make its own TVs anymore, because now you know that China's Skyworth is taking over manufacturing and marketing of Panasonic TVs. Panasonic will continue to provide technical expertise and quality assurance. Man, you'd sound so smart if you went to the dinner table. ASML is developing a 1,000-watt, three-laser EUV light source that hits 100,000 tin droplets per second, so its scanners can process 330 wafers an hour by 2030. That's a lot of numbers, but ultimately what does that mean? It's going to cut chip costs, and it's going to increase how many chips can be made as well. Yeah, that will make you sound smart. Discord ended a one-month UK test with Peter Thiel's age verification company, Persona. So that's the part that I think is getting lost in a lot of the reactions here: a one-month test. They ended it after researchers found that its age check code was stored on the open internet, which Persona, by the way, says, yes, we did not consider that to be a vulnerability. But the test is over, so Discord is not using Persona. Moving on, DJI is asking a U.S. appeals court to overturn the FCC's decision that bans new approvals of its drones. They're arguing that regulators never actually proved any national security risk. Russia has launched a criminal case against Telegram founder Pavel Durov. Remember, France did this as well. Russia accuses him of aiding terrorism, after blocking the service along with WhatsApp in the country a few days ago. So apparently Pavel Durov is not giving Vladimir Putin what he wants. Hmm.
Speaking of WhatsApp, WhatsApp is working on adding scheduled messages. It starts with a beta "Scheduled Messages" option, that's what it's called, in quotes, in group info, so you'll find it in your info for the group, ahead of a wider release. There you go. Honor's upcoming Magic V6 book-style foldable is expected to have the largest battery in a foldable yet at 7,150 milliamp-hours, while not being large in construction, still retaining that thin body design. Can't wait to find out more about that. Google and Apple announced that they have begun testing encrypted RCS messaging between their platforms. That is welcome. At long last. Yeah. 9to5Google found indications in code that Google will soon offer real-time location sharing in Find Hub, not just that static link. Okay. The UK Information Commissioner's Office fined Reddit 14.43 British pounds sterling. There we go. Million. No. Okay, let me say that again, because, wow, I just destroyed that. 14.47 million British pounds sterling. There we go. For failing to implement a meaningful age verification system until July 2025. Music generator Producer AI is joining Google Labs, after they did a bunch of work with musicians like The Chainsmokers and Lecrae to make sure the platform is helpful for musicians. Interesting. A good read on New Atlas talks about scientists in Austria making a QR code so small you need an electron microscope to read it, and that makes it a useful long-term data storage method. Yeah, it's not good for menu links at restaurants. No. It's a different kind of thing. iFixit has a good read about Iowa farmers pushing their legislature to pass a right-to-repair bill that requires fair and reasonable access to repair manuals and parts. And see, if you knew all of those stories and could recite them at the dinner table, they'd think you're like a walking Wikipedia. Those are the essentials for today. Let's check in with Dan Campos from NTX.
Dan tells us about Mexico's use of robot dogs for security. Hello, friends of DTNS. We know that dogs are humanity's best friend, but what about their cybernetic versions? Robot dogs have officially arrived in Guadalupe, Nuevo León, marking a new step in local security efforts. The robotic units were deployed during a Rayados match at Estadio BBVA, where the K9X devices conducted preventive patrols at entrances and high-traffic areas. They also entered the stadium and inspected vehicles around the venue, working alongside police officers to strengthen the surveillance operations. Authorities describe the deployment as a pilot test ahead of their planned use during the upcoming FIFA World Cup matches, as the city prepares to host international visitors and large-scale events. For this and more news, check the latest Noticias de Tecnología Express. Back to you, amigos. Thank you, Dan Campos. Oh, my goodness. Those robot dogs never get any cuter. No, they don't. They're not nearly as cute as Bronson. Yeah, the ones that are covered in fluff at CES are very different from those. Indeed. We end every episode of DTNS with some shared perspectives. First, DJ George, with a three at the end, has an addition to the OpenClaw OAuth bans from Google and Anthropic. Yeah, I wrote this in our Discord. Just listened to yesterday's show, and I would like to add that OpenClaw was allowed to use OpenAI Codex 5.3 OAuth. This means that OpenClaw users can use Codex 5.3 as primary and not necessarily have to succumb to unsafe models. Just my two cents. Okay. And then David has an argument in favor of forcing developers to use AI tools. Yeah. So last week, we had lots of emails from devs about them finding the value in tools. But we also had an ongoing discussion about how companies kind of force people to use tools, and then people don't use tools for the valuable reasons.
David wrote: many older devs who were set in their ways played with the tools they were given at his workplace for half an hour and, before they fully understood how to use them, decided that they did not like them. Some other devs did not fully recognize that using AI tools is an iterative, back-and-forth conversation. The tool will not always be perfect the first time. And because they did not recognize this, they gave up the first time it made a mistake. We organized sessions, documentation, and seminars, but none of them really moved the needle. I accept that not all developers will find these tools useful, but we are considering forcing every one of them to use them, at least for a set period of time, so they will learn how to use them. If after this period they decide that it's not for them, that's fine. This might be the reason why these companies are forcing their devs to use the tools: the people who analyzed the tools truly believe that they are useful, but we just need to get everyone to give them a fair shake. I think that's totally fair. Although I'm sure some people will still feel like they don't want to be forced to use a tool. But I do think that there is something about being encouraged. I realize we're using the word force, and maybe that's the right word, versus encouraged. But being encouraged to use something that they don't immediately see the use for, or the benefit of, because, as with everything, especially with technology, sometimes it takes a little time before you kind of get it, or before you find the thing that it is good for, or before you just know enough about it that you can make an informed decision and say, you know what, I've tried that, I didn't like it, that's why I choose to do this instead. Yeah. No, I think David brought up a really good point, and I'm glad he wrote this in, because I get why there's a reasonable frustration on the other side of this issue.
I still don't think it justifies companies like Accenture saying the amount of time you use a tool will determine whether you're eligible for a promotion or not. I think that is too blunt an instrument. I like what David's saying: force them to use it for a set period of time, and then after that, if they still don't want to use it, fine. But even then I'm like, will they still get it? To me, there are two things you need to do with any tool. One is make sure that people are properly trained, which I know David says they did, but maybe go back and look at why it didn't catch on. Why didn't people realize that it's a back and forth? Is there something in the training that could have helped them understand that better? And the other is results. If you have a tool that is optional to use and a bunch of people are using it, that's different. If nobody will use it, that's a different situation. But if some people are using it and they're hitting their marks and getting things done better, you say, great, now the deadlines are faster because these tools make it faster. The people who don't use the tools will need to brush up on the tools to be able to hit those deadlines. So I am very much in favor of training well and judging by results, rather than forcing people to do something. But I get what David's saying: maybe there does need to be a set period of time where you're like, no, just keep using it. Try the training we gave. And I think there's a way to foster that that's not randomly mandating, like, you're forced to use it, because then people won't learn to use it well if they're feeling forced. They need to be encouraged, like you were saying, Jason. Yeah, absolutely agree. Well, thank you for this, David. That was a really good email. I appreciate you sending that. If you've got insight into a story like David did, share it with us: feedback at dailytechnewsshow.com. Yes, thank you to Dan Campos.
Thank you to DJ George and David for contributing to today's show. We couldn't do it without you. And thank you for being along for Daily Tech News Show. You can keep us in business by becoming a patron at patreon.com/DTNS. We'll see you tomorrow, helping each other understand. Diamond Club hopes you have enjoyed this program.