Prime Video offers the best in entertainment. This should be fun. Jason Momoa and Dave Bautista go all out in the hilarious new action film The Wrecking Crew. Included with Prime. Yeah, I'm pumped. Watch the new Game of Thrones series A Knight of the Seven Kingdoms. Based on the bestseller by George R.R. Martin. Watch with an HBO Max membership. So be brave, be just. So whatever you want to watch, Prime Video. Here you can watch everything. Subscription required. Content rating is 18+. Global shifts are redefining business. How can you stay ahead? Find the answers on our Think Ahead podcast. Humans make mistakes. The generative AI can outperform and reach superhuman levels of performance. We get ourselves tied up in knots about, oh, we can't analyze the algorithm, when what we really need to do is analyze the output and compare it to how good humans would be at that task. Stay informed and stay ahead with the Think Ahead podcast from London Business School. This is the Daily Tech News for Friday, February 20th, 2026. We tell you what you need to know, give you some important context, and really, honestly, we try our hardest to help all of us understand all this. Well, today, Andy Beach explains how MGM is using AI to help producers, and we finish our week-long discussion of how developers should actually be allowed to use coding tools. I'm Tom Merritt. I'm Huyen Tue Dao. Let's start with what you need to know with the big story. Now, I want to talk mostly about Meta's decision to take Horizon Worlds out of its VR headset, which seems odd, but I think I kind of get what they're up to. But since we're talking Meta, let's note a couple of other stories here. Meta is among several companies that have restricted employees from using OpenClaw for work due to security concerns. There's a good story on that over at Wired. For the second year in a row, Meta reduced the stock options that employees will receive, this time by about 5% for some employees.
If you read closely, though, they are raising the rewards for quote-unquote top performers. So the overall compensation budget went up. When I first saw this, I thought, oh, this is because they're spending so much money they have to cut back. It doesn't seem like that, because they're also spending more money on compensation. So if you see that headline, don't just settle on the headline. There's more to it than that. And then there's this. A post from Meta's VP of content at Reality Labs, Samantha Ryan, called Our Renewed Focus in 2026. It talks about improvements to developer platforms for its VR products. That's all very interesting if you're into that especially. But it also says that Horizon Worlds, Horizon Worlds is that virtual chat room, the meeting place where I don't think you have legs. Maybe you have legs now. Maybe you have partial legs, but you can sort of walk around in a virtual space, talk to people, sit down in a room, have a meeting, something like that. It is something that someday might be called the metaverse. That's going mobile only. Yeah, in Ryan's words, we're explicitly separating our Quest VR platform from our worlds platform in order to create more space for both products to grow. Why, you might ask? I'll just quote Ryan again. We've grown mobile-only worlds from zero to 2,000 plus over the last year. Top creators like Do Big Studio have joined the program, bringing with them engaging worlds like Steal a Brainrot. We grew mobile monthly active users over four times in 2025, and creators are finding success on the platform. So to me, that says nobody's using it on VR, but a bunch of people are using it on mobile. And at Engadget, Ian Carlos Campbell notes that this puts Horizon Worlds more in competition with Roblox and Fortnite rather than with Second Life. Huyen, what do you make of this?
I can only imagine that, as you said, they're not seeing enough adoption on the VR platform and are trying to make something of the MAUs, the active users they are getting on mobile. I find it also very odd because it's a mobile phone and we're talking about virtual reality. I mean, there are phones out there, I assume. And obviously, because they're seeing some adoption and some use, it must be usable to some degree. But wow, y'all, I'm just really, really curious. I feel like there's a lot of questions I have. Like, what's the actual user experience like? What are the phone demographics? Like, what countries is this working in? Is it mostly, you know, countries where top-end flagship iPhone-y type phones are the biggest? I just have so many questions about how this is going to work, and it does seem like a very odd thing. I mean, it's weird because, as a software engineer for many, many years now, I feel like I've been on both sides of this, where you have a product that has some adoption, but it doesn't generally seem like something that has a lot of growth in it. Like, there are going to be some people who are diehard folks, but the ROI is a bit low. Or in this case, I can only assume there are some serious technical hurdles to this getting wide adoption, but maybe they don't need that. And maybe this is just, I don't know, it's just very odd to me. I was very confused when I saw the headline. Once I read into it, I started to understand, like, okay, so you have users for whatever reason, and you're just going where the users are and cutting your losses on the VR platform.
But I'm still, in my head, thinking of Horizon Worlds as the thing Zuckerberg always showed us, where he's like, look, I'm walking around and talking to people. I'm like, how does that work on the phone? And it wasn't until I read Campbell's note that I went, oh, this is Roblox. This is Fortnite. This is Minecraft. You're not walking around like you do when you have the VR headset on. There's just a virtual world you play with on your phone. And once I start to think about it that way, I'm like, OK, I guess it's very different than the VR platform version, but it's the same engine underneath that can let you do stuff. So sure. Yeah. I mean, Fortnite, incredibly popular. Roblox, incredibly popular. So I'm not guessing that Horizon Worlds will immediately become relatively popular, but I suppose it has a chance if you've got developers who are willing to create stuff for it. As a burgeoning middle-aged person who is not familiar with these things, do Roblox and/or Fortnite have a mobile version? They do, right? Yeah, yeah, yeah. No, they do. And you'd be forgiven for forgetting that, because they're not in the app stores usually, because of the spat that they're having with Apple and Google about that. But yeah, they have a very popular mobile user base for that stuff. All right. Well, then I guess I'm less perplexed, probably given that they have very smart engineers working on optimizing those kinds of things. I do think it's very interesting. I also have this impression of Meta as being one that, I don't want to call it out as sunk cost fallacy, but they really, really want to hold onto their ideas. Like they feel like they have really good ideas, and you basically have to pry their ideas out of their hands when, you know, when the horse has been kicked enough. Kind of, I'm mixing a lot of metaphors there, but yeah, I get that flavor a little bit. It's the sunk cost horse. Sunk cost horse. The poor horse.
It's like Artax in the swamp of sadness. Yeah. Anyway. Don't abuse the sunk cost horse, Meta. All right. Well, on that note, we want to take time to thank you guys, because DTNS is made possible by you, the listener. We want to thank you, AB Puppy, Dale McCauley, and Matt Zaglin for keeping us out of the swamp of sadness by supporting us. Yay! Yay! There are a bunch of other stories to talk about today, so let's get right to the briefs. All right, well, Samsung re-announced, or announced rather, a reboot of its Bixby virtual assistant on January 20th, then deleted the post. It now looks like maybe somebody just got the publish date wrong, because on February 20th, a month later, Samsung has now posted that it's rebooting Bixby. The new Bixby will let you use natural language to control your device without needing to know exact setting names or commands. So Samsung's example is that instead of having to say, turn on the keep screen. Hold on. See, even when it's written in front of you, it's hard to do it. It's hard, yeah. So instead of you having to say, turn on the keep screen on while viewing setting. Gosh, yes. You can just say, as a normal human being would, I don't want the screen to time out while I'm still looking at it. So as you can see, a very good improvement. The new Bixby is available in the One UI 8.5 beta for the Galaxy S25 in Germany, India, Korea, Poland, the UK, and the US.
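As an aside for the developers listening, the kind of matching being described, turning a plain-English request into a setting toggle without the exact setting name, can be sketched in a few lines. This is a purely hypothetical illustration: the setting names, descriptions, and word-overlap approach are invented for the example and are not Samsung's actual Bixby implementation.

```python
# Hypothetical sketch: match a natural-language request to a device
# setting by word overlap with plain-English descriptions, rather than
# requiring the exact setting name. Illustrative only.

SETTINGS = {
    "keep_screen_on_while_viewing": "do not let the screen time out while I am still looking at it",
    "adaptive_brightness": "automatically adjust screen brightness to the light around me",
    "do_not_disturb": "silence notifications and calls so I am not interrupted",
}

def match_setting(request):
    """Return the setting whose description shares the most words with the request."""
    req_words = set(request.lower().split())

    def overlap(item):
        _, desc = item
        return len(req_words & set(desc.lower().split()))

    best, _ = max(SETTINGS.items(), key=overlap)
    return best

print(match_setting("I don't want the screen to time out while I'm still looking at it"))
# -> keep_screen_on_while_viewing
```

A real assistant would use an intent model rather than raw word overlap, but the shape of the problem, request in, setting out, is the same.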
Yeah, I think this is almost the opposite example of what's going on with Meta, where they won't stop riding around on their sunk cost horse. Instead, Samsung is like, oh, let's actually make Bixby usable. Otherwise, get rid of it. And this sounds usable. Now, granted, they could have given Gemini that kind of role, so they wouldn't have had to keep Bixby around. There's a little bit of Samsung's, you know, being territorial there. But if it works as advertised, I think that's great. Yeah, 100%. I would prefer to have said the second thing I said rather than the first thing I said. Yeah, no, I think that was a great example of why, you know, it's almost useless to have to memorize that setting to tell it, versus just being able to describe, like, I want this to happen. And if, again, if it works, then that's great. Financial Times reports that two minor Amazon Web Services outages were the result of autonomous agents making mistakes. And I don't know exactly how much to make of this, because these are very minor outages, and Amazon has a logical defense of them. Here's what went on. A 13-hour interruption in December happened to a tool that was only available in mainland China that helped customers figure out how much it would cost if they wanted to do different things. So this isn't a disastrous outage. It happened, though, because the Kiro, K-I-R-O, the Kiro AI coding agent, determined the best course of action in the course of carrying out some instructions was to delete and recreate the environment. Which, as you might imagine, caused some problems. Amazon posted an internal review of the outage. It did not make it public. Several Amazon employees confirmed to the Financial Times that this was the second outage in the past few months, the other one not being a consumer-facing tool at all. One employee said to the Financial Times, the engineers let the AI, or the agent, resolve an issue without intervention. The outages were small, but entirely foreseeable.
Now, Financial Times reached out to Amazon, which acknowledged this and said it was a coincidence that AI tools were involved, that the same issue could have occurred with any developer tool or manual action. In fact, the changes Kiro made should have required authorization from a second person. But the actual human using Kiro accidentally had broader permissions than they should have. That is unrelated to the use of Kiro. Amazon said this was a user access control issue, not an AI autonomy issue, at least in that case. And finally, this line from the FT story I'm going to tuck away for later in the show. They added that the company had set a target for 80% of developers to use AI for coding tasks at least once a week and were closely tracking adoption. So the reason they were using Kiro for this is because they needed to meet this 80% thing. But let's focus on the outages. When you think about this, is this no different than human error, as Amazon is trying to argue, and we just sort of notice it before because it's an autonomous tool? Kind of like with autonomous car accidents. There are many fewer of them, but we notice them more because it's an autonomous car. Okay, so taking the last part of that out, with regards to the AI agent coding, I think it is fair to make an argument that this is the kind of thing, and I mean, I personally have seen similar things happen, like when someone has too many permissions that they don't realize they have, and they do something, and that happens. I have personally, and I am a good engineer, but I personally have also done things that result in what can be considered outages on different services throughout my career. So yes, these things happen. I commented out an entire website once by mistake. So okay, so yeah, I mean, these things happen. And I would say that this could happen in the presence of only human beings and not just an AI
agent. I do think that we are noticing it because it is autonomous. And I think the problem with AI and agents is scale, in that if you just have humans doing this, there's an upper bound on how many of these problems can happen. But when we talk about scale, and maybe, again, I might not be understanding the exact technical parameters of this outage, but in general, the problem often tends to be scale. And in this case, right, there was some need for human intervention. Now, if people just start ramping up AI, and people start expecting higher outputs of AI agents that are beyond, you know, an individual human's capacity to, number one, create, and number two, review, that's where it gets a little worrying. Because I think in a lot of ways, a lot of this has been sold as AI can do X and Y and Z better than human beings, which I think gives it a sense of inflated trust, at least at this moment in time of the technology. And so, and I'm trying not to get to the last part of the story, where there's inherent pressure, but also, there's just a trust there, because we generally have this confidence in the tool, which I personally feel is overinflated. So that's kind of my issue: scale and trust. Whereas if we had more checks in place, that would be less scary. But yeah, I think there is an element of the autonomous agent in here, but not the specific problem that happened. I kind of think everybody's right. Yeah. I think the engineer was using an agent when they didn't need to because of this perverse incentive. Yes. I think that Amazon's right that this could have happened anyway if they weren't using an agent. But I think all of the engineers that talked to the FT were right. They were like, yeah, but they weren't taking the precautions they needed to stop the agent from doing that.
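The missing precaution described here, a second person signing off before an agent takes a destructive action, can be sketched as a simple policy gate. Everything in this sketch is illustrative: the action names and the gate are invented for the example and bear no relation to Amazon's actual access controls.

```python
# Hypothetical sketch of a guardrail for agent actions: destructive
# operations require sign-off from a second human before execution.
# Action names and policy are invented for illustration.

DESTRUCTIVE = {"delete_environment", "recreate_environment", "drop_table"}

class ApprovalRequired(Exception):
    pass

def execute_agent_action(action, requested_by, approved_by=None):
    """Run an agent-proposed action, blocking destructive ones that
    lack an independent second approver."""
    if action in DESTRUCTIVE:
        if approved_by is None or approved_by == requested_by:
            raise ApprovalRequired(f"{action} needs sign-off from a second person")
    return f"executed {action}"

# The December incident pattern: an agent deletes and recreates the
# environment with no second approver. The gate blocks it.
try:
    execute_agent_action("delete_environment", requested_by="coding-agent")
except ApprovalRequired as e:
    print("blocked:", e)

# With an independent approver, the same action goes through.
print(execute_agent_action("delete_environment",
                           requested_by="coding-agent",
                           approved_by="oncall-human"))
```

The point is that the gate lives outside the agent: it does not matter how the agent reasons, the destructive call still cannot execute without a second set of eyes.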
And so if it would have happened without the agent's use, it would have been a different mistake than the one that caused it, if that makes sense. Yeah, no, I 100% agree. Yeah. All right. Well, Decoder has a good write-up of a Microsoft report called Media Integrity and Authentication, Status, Directions, and Futures. This gives details about three methods it has developed to identify generated content. None of them is reliable on its own, and 20 out of 60 combinations achieved high confidence levels. And reversal attacks can make fakes look real and real content look fake. And these generally take the form of adding noise to the data, which throws off the frequency distribution that detectors use to evaluate it. Yeah, I mean, if you want to be a glass is a third full person, 20 out of 60 is not, you know, at least some of them have high confidence levels. But really the upshot of this report was even when you get the 20 out of 60 that work, you can then do other things to combat them. So we are not yet there. I think we'll get there. I don't know when, but we are not yet there on being able to detect generated content on a regular basis. I mean, I think we as humans are actually better at detecting generated content than the tools are, frankly. That's one of our current advantages. And that's so interesting to me, too, is like and again, like a big still a big pro for human intervention is that there is something about our brains and the wiring. Just like we can notice like that uncanny valleyness that we get with certain generated. It's just something inherent in our brains and which are like, you know, quite sophisticated, whereas like, you know, these are still algorithms that are very confined to certain parameters. So I find hope that you still need me, guys, at some point in the process. You're so right. 
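For the curious, the frequency-distribution idea in the report can be illustrated with a toy example: a simple roughness statistic (energy in sample-to-sample jumps, a crude proxy for high-frequency content) separates a smooth stand-in for natural content from a blocky stand-in for generated content, and adding noise shifts the statistic, which is the reversal attack in miniature. This is a toy sketch, not Microsoft's actual detectors, and every signal here is synthetic.

```python
# Toy illustration of frequency-based detection and why added noise
# defeats it. Purely illustrative; not an actual detection method
# from the Microsoft report.
import math
import random

def roughness(signal):
    """Mean squared difference between successive samples, a simple
    proxy for how much energy sits in high frequencies."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

n = 512
# Smooth sine wave: stand-in for "natural" content.
smooth = [math.sin(2 * math.pi * 3 * i / n) for i in range(n)]
# Blocky square wave: stand-in for "generated" content with extra
# high-frequency energy.
blocky = [math.copysign(1.0, s) if s != 0 else 0.0 for s in smooth]

print(roughness(blocky) > roughness(smooth))   # detector separates them

# The attack: add noise to the natural signal, and its statistic
# shifts toward the "generated" range, confusing the detector.
rng = random.Random(0)
noisy_smooth = [s + rng.gauss(0, 0.5) for s in smooth]
print(roughness(noisy_smooth) > roughness(smooth))
```

Real detectors use far richer spectral features, but the fragility is the same in kind: the statistic measures the signal's surface texture, and noise rewrites that texture cheaply.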
When would you like me to give you some other examples of where hope was necessary? Yeah. If you recall, last autumn we talked about a case against prominent Apple leak publisher Jon Prosser. The scenario was that someone named Michael Ramacciotti used a now-fired Apple employee Ethan Lipnik's iPhone to secretly make a FaceTime call with Prosser to show him details of the Liquid Glass redesign of iOS 26 before it was made public. Prosser tried to say he didn't realize that it was coming from a confidential source, etc., etc. But in October, the court awarded a default judgment to Apple because of a failure to contest the allegations. Prosser claims he has been in contact with Apple, and now all parties to the case have issued a joint status report indicating that Prosser is being deposed. So they're actually talking to him on the record for the purposes of the case. The case may now resume to determine the scope of what confidential information was accessed, as well as damages and remedies. It just takes a while to explain this. But the upshot is that Prosser will be appearing in court. He's not evading this. But it's a big deal because it's going to set the tone for what people who report on leaks are going to risk when they do that. Yeah, 100%. I mean, I guess sort of going with the theme there, from your perspective, are there any, I guess, what did you say earlier? Perverse incentives. Perverse incentives for, I guess, overlooking, or just not vetting sources properly and things like that in this particular arena of reporting? No, exactly. And we actually talked about this. I remember us talking about this in October. Yeah, I think we did, actually. About, you know, what are the journalistic standards and what are you allowed to do in protecting sources and all that. And Prosser says he followed them because he didn't realize what Ramacciotti was doing.
And that's what the court is going to try to determine in assessing this. But because he didn't contest it, I think he's going to end up being found liable to some extent. They might settle out of court, which honestly would probably be the best for other people who report on stuff like this if they did. Well, folks, if you want honest reviews from people who actually buy and live with products, you should check out Live With It. Live With It is a show hosted by Sarah Lane. Often she talks about her own gear, as she is this week, talking about the Fi pet tracker that she used on her dog Rex and why maybe it wasn't great for her, but here's who it could possibly be good for. But you're getting her own experience from it. We also had Rob Dunwood talking about the Sony camera that he's had for like 10 years last week. Get Live With It in your life. You can subscribe to it wherever podcasts are found or watch it at youtube.com slash daily tech news show. Let's get to some quick headlines that are just the kind of thing that are going to make you look smart if you know about them. Well, at India's AI Summit 2026, you know, the one we talked about yesterday with Handhold Gate, OpenAI announced that 50% of ChatGPT queries in India are sent by people between the ages of 18 and 24, and 80% of its users in the country were younger than 30. People in India use Codex three times more than the global median. Also at the summit, technology advisor Michael Kratsios said on Friday, this is U.S.
technology advisor, government technology advisor, said, We totally reject global governance of AI. AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralized control. That's what he said. WhatsApp is rolling out group message history, meaning new entrants can see 25 to 100 recent messages, depending on the group's settings. Two former Google engineers and one of their husbands, all Iranian nationals, were charged with stealing trade secrets related to Tensor processors and Pixel phones. If guilty, they would face up to 20 years in prison. Los Angeles County is suing Roblox for unfair and deceptive business practices, alleging that the platform, quote, failed to protect children from predatory behavior. And in some not-weird court case news, Google's Android app ecosystem safety report says last year the company rejected 1.75 million Android apps that violated policies and blocked more than 80,000 developer accounts. But that's down from 2024, when it was 2.36 million apps and 158,000 accounts. So fewer people targeting the Google Play Store, I guess. All right. OK, so Snap is positioning itself to push augmented reality glasses. Scott Myers, its SVP of Specs, has stepped down after six years with the company. And The Information has a good read on OpenAI's 200-plus devices team. That's 200 plus people, not 200 plus devices. The team is the devices team. Working on a smart speaker, smart glasses, and a smart lamp, according to The Information. And finally, Wired has a good read from Maxwell Zeff about how Perplexity's announcement that it's moving away from ads is an indication that it's going to shift away from consumer to be focused on enterprise. That's intriguing. Okay, so Perplexity may be becoming more of an enterprise company. All right, those are the essentials for today. Let's dive a little deeper. All right, well, Amazon says it wants to use AI to speed up TV and film production.
Andy Beach explains why this isn't about scripts. It's about infrastructure. Andy Beach, welcome back to the show. Tom, good to see you as always. Yes. So Amazon, doing the thing that Hollywood gets very upset about, talking about AI along with film production and also TV production here. But we're not talking about generating characters or having an LLM spit out a script. What are we talking about with this? Yeah, you know, the announcement, the headline almost does a disservice, I think, in some ways, because it puts it out there squarely as AI in film production. But what they really are talking about, as you go through the piece, is automating a lot of the back office pieces. There's a through line and a business angle inside of this article that says that ideas don't make it to the screen because studios can't scale the ideas fast enough. And so we're going to automate pieces of the pipeline in order to make it faster for Amazon to get content from a script to a deliverable that is viewable on your TV. So when you say the back end, you're talking about things like accounting and scheduling and logistics and budgeting, and things that the actual producers involve themselves with more than the directors and the actors. The coordination of the work is a huge part of it. And that's something that an AI, in theory, is good at helping with and assisting. Absolutely. But yeah, the budgeting, the accounting, scheduling are all pieces of it. And then the post-production process itself is its own art, when you get a piece into reviews and you're waiting on sign-off from a variety of different people. So coordinating those tasks is part of what takes so long in production cycles. And it is their hope that they can speed that up. So it's like Calendly, but times a thousand. Yeah, in many ways.
And, you know, I think there's also a thing that Hollywood hates to acknowledge, which is they already use a lot of AI in production. And I think this acknowledges to a certain degree that, yes, this is in place, and we're going to be adding more of it. But it is really around the infrastructure and the plumbing and the pipes, so that the people aren't having to do that part of it. They're working on the actual creative, and they're telling the actual story pieces. And then we're using the computers to speed up the time for approvals around that. So you can get the project done faster, not because you shot things faster, but because you didn't have to wait in between the creative processes. Am I getting that right? Yeah, absolutely. I think there's a variety of ways inside of the production bubble that it can help, because there is often a requirement, particularly when you're talking about a true Hollywood production. There are, you know, people who are literally just worried about the audio background trails for how sound effects will sound. And being able to coordinate the scheduling of that with the people who are working on the color look and feel simultaneously, so that you can bring the two together and get overall sign-off and approval versus incremental approvals at each stage, will help speed up the process. And it's still using the humans to do the approvals. So how does it actually speed it up? Because today, the scheduling of all of these various parts is complex, because you literally will have hundreds of people working on it. Work on a production doesn't happen in the same linear fashion as what we watch. It's very much out of sequence and out of order.
And so if it can optimize the schedule for when different parts need to be done, then you can coordinate larger sign-off blocks of the different pieces that are going through the pipe together, versus having to have a producer sign off on just sound effects without knowing what the picture it's going to go with is, which takes an entirely other set of sign-offs at some stage. Part of it is just that arrangement of when things should be done, based on the time that's allowed and the necessary speed to market. I think there's a secondary, ancillary piece, which got talked about a little bit in this, but not as much, which is that we're thinking differently about, you know, the devices that we watch TV on today, obviously, as well. And editorially, the look and feel will be different for them. It's not just as simple as, oh, it's on my phone, so it's a vertical video this time. It may literally be a shorter sequence and a shorter shot. So I think as they get this production pipeline more automated, they will also be able to automate different outputs for different devices and different viewing habits that we may have. In other words, there might be a version of an Amazon Prime TV show that I watch sitting on my couch on TV, but there may be a slightly shorter, different edit of it that I can watch on my phone while I'm walking, that is a little more audio-heavy and less visual-heavy, because it knows I can't be watching the phone as much. And that is a mixture of both creative decisions and just editorial decisions that can be automated. So it is something that could exist if automation helped the editors get through that process faster. That is fascinating. Yeah. Because it changes not just the schedule, it changes the possibilities, you know. Yeah, yeah, yeah. Well, and I think Amazon is very good at building something for itself that it can then sell to other people. So I'm sure that's part of this process as well. I mean, absolutely.
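The batching of sign-offs described here is, at heart, a dependency-scheduling problem. A hypothetical sketch, with invented task names and no relation to Amazon's actual tooling, might group tasks into waves whose prerequisites are all complete, so a producer can approve each wave together instead of approving every task one at a time.

```python
# Hypothetical sketch of grouping production tasks into sign-off
# "waves" by dependency level. Task names are invented for
# illustration; this is not Amazon's pipeline tooling.

def signoff_waves(deps):
    """deps maps each task to the set of tasks it depends on.
    Returns a list of sets: each set is a wave of tasks whose
    prerequisites are all complete, so they can be approved together."""
    remaining = {task: set(d) for task, d in deps.items()}
    done = set()
    waves = []
    while remaining:
        ready = {t for t, d in remaining.items() if d <= done}
        if not ready:
            raise ValueError("dependency cycle")
        waves.append(ready)
        done |= ready
        for t in ready:
            del remaining[t]
    return waves

deps = {
    "sound_effects": set(),
    "color_grade": set(),
    "picture_lock": {"color_grade"},
    "final_mix": {"sound_effects", "picture_lock"},
}
print(signoff_waves(deps))
# sound_effects and color_grade are reviewed together in wave one,
# then picture_lock, then final_mix.
```

Batching by wave is exactly the "larger sign-off blocks" idea: independent tasks land in front of the approver at the same time instead of trickling in one by one.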
You know, Amazon years ago took a big bet on distribution of content by buying a compression company and making it the backbone of media technology through Amazon Web Services. It's worth reminding people that AWS, the cloud arm of Amazon, is a different part of the business than Amazon Prime Video and Amazon itself, the retailer. But they certainly all work in conjunction together, and they have an easier time taking a dependency on that backbone to do this. But AWS will absolutely be able to take what they create for this tech stack and hopefully sell it to Hollywood and production companies and advertisers and others who want to make similar things. No question. Well, Andy, thank you for letting us depend on you to give the backbone to the story. I appreciate it. Where could folks find more of what you do? I love to write about media and AI technology and where they intersect. I do that on my Substack, which is called engines of change dot AI. Thanks, man. Thank you, sir. Yeah. Another example, I think, of using a tool to do something that it's good at instead of just sticking it in and saying, hey, it might be good at this. Yeah. And also the fact that, you know, the ways that we often see AI, and generative AI in particular, kind of advertised to users are the, let's call them, flashier things, like generating content. And we are obviously all, to some degree, uncomfortable with the implications of that, especially on artistic endeavors. But the less flashy stuff, like reducing the bureaucracy, reducing the overhead, is what seems to be the place where it does the most good and can balance out with the need for human intervention. And also, you know, human creativity, which is, I feel, what I would still like in my movies and film. But it doesn't sell as well.
And so we have like, again, it does feel almost again like an idea of like inverse incentives where you want to promote the thing that probably has the least positive and most negative impact as opposed to the boring thing. It also doesn't get as many clicks when you cover it. It really doesn't. It really, really doesn't. And that's like that's like that again, that inverse that inverse relationship. Well, we end every episode of DTNS with some shared perspective. And this week, we've sort of had, you know, an ongoing conversation about the proper use of coding tools. Andrew in Colorado weighs in to close out this week's thread. Yes, Andrew writes, Jay's email about teaching a coding assistant about a code base and getting a working enhancement done quickly was great. It's the kind of thing I think of as not previously possible and not because it wasn't technically possible, but because the corporate economics of both getting the idea accepted slash prioritized and then doing the implementation probably wouldn't have lined up previously. Never mind that Jay's detailed documentation also hadn't been prioritized yet and was a necessary speed up along the way. I'm making a bit of a leap here, but I bet the biggest time saving wasn't the coding time, even though he did save a week, but rather the corporate overhead that comes from needing to vet and schedule a new idea for the system. The more we can push coding assistants to reduce meeting time and not experimentation time, the better. Discussions over working systems are much more productive than theoretical ones, especially for junior devs who might have a good idea, but not the words to describe it beforehand. Andrew, you're my dude. I feel like there's so many things that you've touched on here. 
And I mean, even for me, and I've mentioned this before, among the most pro-AI folks I've talked to, from management to product, the ones I'm most comfortable collaborating with are the people who understand that coding is not the biggest time sink. I'm sorry, I've said this so many times: the thing that eats my time is meetings. And again, as Andrew and Jay are talking about here, it's some of that overhead where, in a lot of places, when you have an idea or something you want to propose, you have to write up a document about it, and it has to go through a lot of communication channels to get approved. Not unlike what Andy was talking about in that piece right there. It's the same thing. And that is where the best time saving is. Even this week, I spent so much of my time, productively, in meetings, discussing things, referencing docs, writing documentation. And I understand and agree with Andrew here, and I think that's what really got me about Jay's email. I wasn't able to articulate it yesterday, but yeah, he's doing the things that are in fact very time-consuming but important. And honestly, not every place demands detailed documentation for approval.
If that's not part of the culture and part of the demand, a lot of that stuff isn't done, which is its own can of worms. When you don't have documentation, when you don't have explanations of reasoning and justifications, that's its own problem. And a lot of times, especially when you're in a high-pressure situation where you just want to get the thing out, that stuff is very easily put to the side, even though in my opinion it's just as important as the coding. Sometimes, as with Jay's example, it's almost more important to be able to think things through. So I think there are all these wonderful threads tying together here. Yeah, I would love less meeting time and less document-writing time and more time to actually experiment, which would give me more time to figure out how AI coding works. Yeah. Just saying. And fewer arbitrary mandates like 80% of your work should include the tool. That just makes people use the tool for the wrong reason, because they need to check the box. Yeah. I mean, that's the thing that's driving me nuts, and I already mentioned it before. The incentive now is just assuming that coding is going to be the number one productivity impact, and that we measure that impact by literally how much you're logged in, how many times you logged in, how much code you produce. And look, as I mentioned, there's a lot of pressure on anyone in any job to meet performance criteria to keep that job. All you're doing is incentivizing people to use it as much, but as shallowly, as possible, because that's how you get those numbers up. This is why, for example, I've had discussions, pre-AI era, about why counting the number of lines of code is actually a pretty bad metric, and why, if you put these incentives in place, people are going to maximize them. They're going to write bloated code. Yeah, yeah. Like, exactly.
Like, you could just copy-paste the same bit of code 30 times instead of writing something more efficient. The incentives are, again, inverse incentives. And that's just my... I think inverse incentives should go on a T-shirt. It's not very catchy, though. Just a strikeout symbol. Yeah. And I mean, I really appreciate you, Andrew, bringing up junior devs, and I know I've harped on this before as well. Part of the assumption in a lot of these discussions, to me, is a situation where you have someone like Jay, like Andrew, like myself: experienced developers who have a good notion of what a good process is, what a good architecture is. And you kind of assume we have that, and that way we can be the ones to orchestrate and conduct the AI. But we earned that through the benefit of years of doing the thing manually and learning, of having hands-on experience. And I just want to touch on, again, what Andrew writes: we need to give junior devs coming up in this environment a way to gain those critical thinking skills. And yeah, I just think we're not thinking about that part either, because eventually the rest of us are going to retire, and you're going to need someone with that experience and that knowledge to step in. Well, smart companies, listen up. This is the way you get an advantage over the dumb companies that are doing the dumb incentives. So just real quick, as a chaser for the coding discussion, in case you want a different perspective, Brandon wrote in about an unsettling change to his local Burger King. The Burger King near my house has started using an LLM-based program in the drive-thru. It was overly cheery and pretty good at taking my order when I first encountered it. I went back a month later, and now it asks, "Do you have room for [insert sale item]?" every time I order an item.
With a family of four, it is asking me if I want something I do not, four times in a single order. It also does it really quickly after taking the order for the item, as if to not give you a chance to process what it's saying. I don't like it. Again, inverse incentives on sticking AI somewhere it doesn't really need to be. Indeed. Well, what are you thinking about? We would love to hear from you. If you have any insight, or if you have funny anecdotes to share, share them with us at feedback@dailytechnewsshow.com. Thanks to Andy Beach, Brandon, and Andrew for contributing to today's show. Thank you for being along for Daily Tech News Show. You folks keep us in business. Become a patron at patreon.com/DTNS. This week's episodes of Daily Tech News Show were created by the following people. Host, producer, and writer, Tom Merritt. Host and writer, Jason Howell. Co-host, Sarah Lane. Co-host, Rob Dunwood. Co-host, Wen Tui Dao. Producer, Anthony Lemos. Producer, Roger Chang. Editor, Hammond Chamberlain. Editor, Victor Bognat. Contributing producers, Kevin Tech, Noel Cow, and Brandon Richards. Science correspondent, Dr. Nikki Ackermans. Social media producer and moderator, Zoe Detterding. Our mods, Beatmaster, WSGottas1, BioCow, Captain Kipper, Steve Guadarama, Paul Reese, Matthew J. Stevens, a.k.a. Gadget Virtuoso, and J.D. Galloway. Mod and video hosting by Dan Christensen. Music provided by Martin Bell and Dan Luters. Art by Len Peralta. Acast ad support from Tatiana Matias. Patreon support from Bobby Wagner. Our guest this week was Andy Beach. And thanks to all our patrons who make the show possible. The DTNS family of podcasts. Helping each other understand. Diamond Club hopes you have enjoyed this program.