The AI Daily Brief: Artificial Intelligence News and Analysis

The Debate Over Anthropic’s New Product: Price or Existential Dread?

26 min
Mar 10, 2026
Summary

The episode analyzes the controversial reception of Anthropic's new code review feature, which costs $15-25 per review and sparked debate about AI replacing human code review processes. The discussion explores broader themes of AI disruption in software development, pricing power of AI companies, and existential concerns among developers as AI agents reshape traditional workflows.

Insights
  • AI inference costs are beginning to resemble labor costs rather than traditional software costs, forcing organizations to reconsider budget allocations
  • The software development lifecycle is collapsing from discrete sequential stages into continuous AI-driven iteration loops
  • Developer resistance to AI code review reflects deeper identity and existential concerns about the changing nature of engineering work
  • Major AI companies are consolidating power by building competing versions of successful third-party tools, potentially eliminating app-layer competitors
  • Enterprise AI adoption requires balancing massive token budgets with demonstrable headcount savings to justify continued investment
Trends
  • Consolidation of AI development tools within major platforms
  • Shift from human code review to AI-automated processes
  • Rising AI inference costs creating budget pressure for enterprises
  • AI companies moving up the stack to capture more value
  • World models emerging as next major AI category
  • Enterprise AI security and compliance becoming critical requirements
  • Developer productivity tools requiring third-party certification
  • Collapse of traditional software development lifecycle stages
  • AI agents handling end-to-end development workflows
  • Major tech companies partnering rather than competing on AI features
Topics
  • AI code review automation
  • Software development lifecycle transformation
  • AI inference pricing models
  • Enterprise AI adoption costs
  • Developer workflow disruption
  • AI agent certification standards
  • World model technology
  • AI security and compliance
  • Code review bottlenecks
  • AI company consolidation
  • Custom silicon vs GPU dependency
  • Enterprise AI platform integration
  • Developer identity crisis
  • AI-generated code quality
  • Agentic engineering workflows
Companies
Anthropic
Launched controversial code review feature priced at $15-25 per review, sparking industry debate
Nvidia
Planning to launch Nemo Claw, an open-source AI agent platform similar to OpenClaw
Microsoft
Launched Copilot Cowork in partnership with Anthropic, responding quickly to competitive pressure
OpenAI
Acquired AI security platform PromptFu and is competing with Claude in code review capabilities
AMI Labs
Yann LeCun's new startup raised $1 billion in Europe's largest seed round for world model research
Cognition
Released competing Devin Review product for code review and understanding complex PRs
Salesforce
Named as potential premier partner for Nvidia's upcoming Nemo Claw AI agent platform
Google
Mentioned as potential Nvidia partner and for their Genie world model technology
Meta
Reportedly backing off custom silicon projects due to immediate compute needs
Adobe
Listed as potential partner for Nvidia's enterprise AI agent platform launch
People
Jensen Huang
Nvidia CEO called OpenClaw 'maybe the most important release of software ever'
Satya Nadella
Microsoft CEO announced Copilot Cowork launch via Twitter
Yann LeCun
Former Meta AI chief founded AMI Labs which raised $1 billion for world model research
Alex Le Brun
AMI Labs CEO stated large language models aren't right for real-world understanding
Boris Czerny
Claude Code creator reported 200% increase in code output per Anthropic engineer
Sean Wang
Latent Space co-host discussing code review as 'final boss of agentic engineering'
Ankit Jain
Entrepreneur wrote essay 'How to Kill the Code Review' predicting end of human reviews
Ethan Malik
Questioned whether Microsoft will provide access to best models in their Cowork implementation
Dan Adler
Sourcegraph CEO warned about potential whiplash on token budgets without headcount savings
Bret Winton
ARC analyst shared revenue projections showing AI companies will exceed Windows/Office by 2028
Quotes
"Human-written code died in 2025; code reviews will die in 2026."
Ankit Jain
"AI agents didn't make the SDLC faster, they killed it."
Boris Taine
"If you can't make $2,700 a month with our product, you've got bigger problems to deal with."
Flo (referencing Michael Bloomberg)
"Code review wasn't even ubiquitous until around 2012-2014. There just aren't enough of us around to remember."
Ankit Jain
"What Microsoft built in around 40 years they will have surpassed in around five."
Bret Winton
Full Transcript

Today on the AI Daily Brief, a big dust-up around Anthropic's new product. How much of it is about price and cost versus some larger existential ennui? Before that, in the headlines: the clawfication of the world continues. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, big thank you to today's sponsors: Recall AI, AIUC, Blitzy, and Robots and Pencils. For an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at sponsors@aidailybrief.ai. Aidailybrief.ai is also where you will be able to find all of the things happening in this ecosystem. But with that out of the way, let's talk Nvidia and the OpenClaw revolution. We have been tracking closely the clawfication of the world, and last week no less than Jensen Huang had some very positive words for the project, calling it "maybe the most important release of software ever." It felt perhaps like a bit of hyperbole, but with a new report from Wired, it makes a little bit more sense. Wired reports that Nvidia is planning to launch their own AI agent platform that is not dissimilar to OpenClaw. They write: Nvidia is planning to launch an open source platform for AI agents. The chipmaker has been pitching the product, referred to as Nemo Claw, to enterprise software companies. The platform will allow these companies to dispatch AI agents to perform tasks for their own workforces. Companies will be able to access the platform regardless of whether their products run on Nvidia chips. Now, the timeline for this seems to be around Nvidia's annual developer conference, which happens next week. Nvidia apparently has been reaching out to very premier partners like Salesforce, Cisco, Google, Adobe and CrowdStrike for partnerships around the platform. Now Wired writes, for Nvidia,
Nemo Claw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It's also another step in the company's embrace of open source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia's software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia's GPUs and has created a crucial moat for the company. What's interesting is that last year there was a lot of discourse about this idea of Nvidia moving up the stack and diversifying away from just pure chips, sort of as a hedge against how the world might change, positioning themselves for those potential outcomes where people are less reliant on Nvidia chips, whether because they've got their own custom silicon or because the nature of the field has changed. I feel like there's less of a sense of that being a likely outcome right now than there was last year. You've basically seen a lot of the big players like Meta seemingly back off of their custom silicon projects. Not, I don't think, because they're not interested anymore, but because the simple reality is that right now they just need the compute at basically any cost and don't have time to wait around to figure out their own systems. Now, I don't think that Nvidia, as smart as they are, is going to stop hedging against future changes, but it will be interesting to see if and where any of these various experiments that they have outside of chips themselves start to actually become a more significant business line for the company in the future. Next up: Microsoft gets in the coworking game. On Monday, Microsoft CEO Satya Nadella tweeted, announcing Copilot Cowork, a new way to complete tasks and get work done in M365.
When you hand off a task to Cowork, it turns your request into a plan and executes it across your apps and files, grounded in your work data and operating within M365 security and governance boundaries. Axios sums up the move this way: Microsoft launched Copilot Cowork on Monday, an enterprise AI agent built on Anthropic's technology and named after the Anthropic product that wiped hundreds of billions off of Microsoft's market cap. In other words, if you can't beat them, join them. And indeed, this is not just a copycat version of Cowork. This is actually a collaboration with Anthropic. Working closely with Anthropic, they write, we have brought the technology that powers Claude Cowork into Microsoft 365 Copilot. Microsoft is also increasingly pushing the idea of being able to select between different models. In that same blog post, they write, your work is not limited by one brand of models. Copilot hosts the best innovation from across the industry and chooses the right model for the job, regardless of who built it. Now there is, of course, a lot of memeing going around about Microsoft being behind or just copying others. But in this case I think their speed of response is actually pretty good. There are lots and lots of people who by virtue of their work environments are stuck in the Copilot ecosystem. And for there to be less than a two-month gap between when Anthropic drops Cowork and when Microsoft offers their version in Copilot, that's a lot better than Copilot users have expected in the past. I also think there's a certain humility and intelligence in not trying to do a janky version of it, but just partnering with Anthropic to actually get the thing close to the same level of capability that the Claude version has. Sean Wang writes: Wait, did Microsoft really clone Claude Cowork? That's kind of based. Still, Ethan Malik brings up the big question that will or won't probably dictate success for this.
That will have a big impact on whether this thing is seen as successful. Malik writes: Microsoft seems to be launching its own branded version of Cowork. A big question is whether it will continue to use lower-end models without telling you. Also whether it will keep pace as the space evolves, or is it a one-off? To me, the question of whether Microsoft will give access to the most recent and best models is big. Given that GPT-5 beat or tied humans in expert tasks less than 38% of the time, while months later GPT-5.4 beat or tied human experts 82% of the time, this really matters. Another big question, he writes: Is this limited to producing materials that use Microsoft apps? How does it handle the fact that so much of what makes Claude Cowork interesting is that it can improvise all sorts of output using code? Adding a little bit of trajectory context to this, Bret Winton from ARC shared the revenue projections from Anthropic and OpenAI as compared to Windows and Office's top-line revenue, showing that if indeed Anthropic and OpenAI are correct, they will exceed Windows and Office revenue sometime in 2028. As Bret wrote, what Microsoft built in around 40 years they will have surpassed in around five. Next up today, we finally get some news about former Meta AI chief Yann LeCun's new startup: AMI Labs has raised a billion dollars in what is Europe's largest seed round ever. The company is officially called Advanced Machine Intelligence Labs and raised from Temasek, Bezos Expeditions and Nvidia. Setting some expectations, new CEO Alex Le Brun says: Anything that involves understanding the real world, we think large language models and generative AI in general are not the right solution. We have at least a year of research before deploying our first real-world applications. But this is not an applied AI company. Honing in on what the company is doing, Le Brun also told TechCrunch: My prediction is that world models will be the next buzzword.
In six months, every company will call itself a world model to raise funding. Certainly between Fei-Fei Li's World Labs and Google's Genie, I think we're likely to see a lot more from world models in 2026. Lastly today, more consolidation in the AI space: OpenAI is acquiring AI security platform PromptFu. Now, what's interesting about this is less the deal itself and more what it implies for OpenAI and their seriousness going after the enterprise. They write that once the acquisition is finalized, they will integrate PromptFu's technology directly into OpenAI Frontier, which is of course their platform, as they put it, for building and operating AI coworkers. Basically their enterprise platform. They write: As enterprises deploy AI coworkers into real workflows, evaluation, security and compliance become foundational requirements. Enterprises need systematic ways to test agent behavior, detect risks before deployment, and maintain clear records to support oversight, governance and accountability over time. One of the things that I expect to see this year is a ton of consolidation around building the true enterprise AI stack inside the big labs. Keep an eye out for more acquisitions in that theme, but for now that is going to do it for today's headlines. Next up, the main episode. Why is there always a meeting bot in your Zoom call? Blame Recall AI. Recall AI powers the meeting bots and desktop recording apps behind products like Cluely, HubSpot and ClickUp. They handle the hard infrastructure work, capturing clean recordings, transcripts and metadata across Zoom, Google Meet, Microsoft Teams, in-person meetings and more, so developers don't have to build it themselves. If you're building a meeting note taker or anything involving conversational data, Recall AI is the API for meeting recording. Get started today with $100 in free credits at recall.ai/aidb. That's recall.ai/aidb. There's a new standard that I think is going to matter a lot for the enterprise AI agent space.
It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe and verified, that changes the conversation. Go to aiuc.com to learn about the world's first standard for AI agents. That's aiuc.com. You've tried in-IDE copilots. They're fast, but they only see local silos of your code. Leverage these tools across a large enterprise codebase and they quickly become less effective. The fundamental constraint: context. Blitzy solves this with infinite code context, understanding your codebase down to line-level dependencies across millions of lines of code. While copilots help developers write code faster, Blitzy orchestrates thousands of agents that reason across your full codebase. Allow Blitzy to do the heavy lifting, delivering over 80% of every sprint autonomously with rigorously validated code. Blitzy provides a granular list of the remaining work for humans to complete with their copilots. Tackle feature additions, large-scale refactors, legacy modernization, greenfield initiatives, all 5x faster. See the Blitzy difference at blitzy.com. That's B-L-I-T-Z-Y dot com. Today's episode is brought to you by Robots and Pencils, a company that is growing fast.
Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams, they build high-impact, nimble ones. The people there are wicked smart, with patents, published research and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers. Welcome back to the AI Daily Brief. In 2026, the one thing that's clear to everyone is that things are moving very fast. Even for an industry where it already felt like things were going quickly, we've ratcheted up another notch. As part of that, everyone is grappling with a series of different issues, everything from the very positive (how do I take advantage of all these new superpowers that I've been given?) to the exciting-but-challenging questions (how do we redesign our organization around these new capabilities?) to the much more existential questions of what it means that the work I've always done is no longer the work I will be doing. In many ways, it feels to me like all of those debates came home to roost around a single product this week, which was Anthropic's new code review feature. Now, this is not a particularly complicated product to explain. As Anthropic writes, when a PR opens, Claude dispatches a team of agents to hunt for bugs.
Code review is a key part of the development lifecycle, so it stands to reason that AI would be trying to add new efficiency to it. And certainly Anthropic is not the only company thinking in these directions. Cognition recently released Devin Review, which they call a reimagined interface for understanding complex PRs. In their announcement tweet they wrote: code review tools today don't actually make it easier to read code. Devin Review builds your comprehension and helps you stop slop. Now, they go through a whole bunch of ways in which the product is different, and it got a pretty good response. A thousand people bookmarked that tweet and three quarters of a million people viewed it. That is, of course, nothing compared to the nearly 14 million who viewed the Claude post, which speaks not only to the relative size of Anthropic, but to the controversy surrounding this new product. So what actually was controversial? On the surface of it, this seems like it would be highly value-additive. While they are biased and incentivized to say so, certainly it seems like all the folks inside Anthropic who are using it have had really positive experiences with it. Alex Albert, who does Claude developer relations, says: this has been a game changer for our internal eng and research teams. Rare to see a product get this much praise from some of the top engineers I know. Boris Czerny, the creator of Claude Code, points out: we built it for ourselves first. Code output per Anthropic engineer is up 200% this year, and reviews were the bottleneck. Personally, I've been using it for a few weeks and have found it catches many real bugs that I would not have noticed otherwise. Jared Sumner writes: Been using this in Bun's repo (fun JavaScript). Bun being a company that joined Anthropic recently, Jared continues: this in my opinion is the best product in the code review category today. It regularly catches extremely subtle bugs and rarely makes mistakes.
Claude Code's Tariq writes: Code review is so, so good. One of those things I can't remember how I lived without. What's more, the discussion of code review and the inevitable changes to it is something that the larger agentic engineering community has been talking about recently, irrespective of this Claude product. Sean Wang (swyx) of Latent Space wrote: this is the final boss of agentic engineering: killing the code review. At this point, multiple people are already weighing how to remove the human code review bottleneck that keeps agents from becoming fully productive. I'm not personally there yet, but I tend to be three to six months behind these people, and yeah, it's definitely coming. Now, he points to a guest essay shared on Latent Space by entrepreneur Ankit Jain called How to Kill the Code Review. The subheader, which encapsulates the thesis pretty clearly, is: Human-written code died in 2025; code reviews will die in 2026. I won't read the whole thing, but a couple of excerpts. Humans already couldn't keep up with code reviews when humans wrote code at human speed. Every engineering org I've talked to has the same dirty secrets: PRs sitting for days, rubber-stamp approvals, and reviewers skimming 500-line diffs because they have their own work to do. We tell ourselves it is a quality gate, but teams have shipped without line-by-line review for decades. Code review wasn't even ubiquitous until around 2012-2014. One veteran engineer told me, there just aren't enough of us around to remember. And even with reviews, things break. We have learned to build systems that handle failure because we accept that review alone wasn't enough. This shows up in feature flags, rollouts and instant rollbacks. The next section, and the core thrust of Ankit's argument, is called We Have to Give Up on Reading All the Code. He continues: teams with high AI adoption complete 21% more tasks and merge 98% more pull requests.
But PR review time increases 91%, based on data from over 10,000 developers across 1,255 teams. Two things are scaling exponentially: the number of changes and the size of changes. We cannot consume this much code. On top of that, developers keep saying that reviewing AI-generated code requires more effort than reviewing code written by their colleagues. Teams produce more code, then spend more time reviewing it. There is no way we win this fight with manual code reviews. Code review is a historical approval gate that no longer matches the shape of the work. Now, Boris Taine wrote something about this as well. His more broadly themed piece from February of this year was called The Software Development Lifecycle Is Dead. Boris writes: AI agents didn't make the SDLC faster, they killed it. I keep hearing people talk about AI as a 10x developer tool. That framing is wrong. It assumes the workflow stays the same and the speed goes up. That's not what's happening. The entire lifecycle, the one we've built careers around, the one that spawned a multibillion-dollar tooling industry, is collapsing in on itself, and most people haven't noticed yet. Boris argues that the software development lifecycle as we learned it is a relic. He writes: here is the classic software development lifecycle most of us were taught. And apologies for those of you who are just listening, but basically it's a circular chart that goes from requirements to system design to implementation to testing to code review to deployment to monitoring, and then back to requirements and through the system again. Boris writes: every stage has its own tools, its own rituals, its own cottage industry. Jira for requirements, Figma for design, VS Code for implementation, Jest for testing, GitHub for code review, AWS for deployment, Datadog for monitoring. Each step is discrete, sequential, handoffs everywhere. Now, here's what actually happens when an engineer works with a coding agent.
In this chart, there is one starting point, which is intent, which moves to the agent, and then the agent works in a circular fashion through code, test, and deployment to the question of: does it work? If the answer is no, it's back to the agent for more code, tests and deployment, and back to the question of does it work. As soon as the answer is yes, the code gets shipped. Boris's point is this, quote: the stages collapsed. They didn't get faster, they merged. The agent doesn't know what step it's on because there are no steps. There's just intent, context and iteration. Boris is talking about the entire development process, but to relocalize it back to code review, which is the subject of this particular product, his section on code review is called Give It Up. Boris writes: the pull request flow needs to go. I was never a fan, but now it's just a relic of the past. I know that's uncomfortable. Code review is sacred. It's how you catch bugs, share knowledge, maintain standards. It's also an identity thing. We're engineers, and reviewing code is what engineers do. But clinging to the PR workflow in an agent-driven world isn't rigor, it's an identity crisis. Think about it. An agent generates 500 PRs a day. Your team can review maybe 10. The review queue backs up. This isn't a bottleneck worth optimizing. It's a fake bottleneck, one that only exists because we're forcing a human ritual onto a machine workflow. All right, so the point here that I'm trying to make is that clearly there is something in the air, big questions, and perhaps an inevitable change coming to the way that we think about code review. And yet still, I was genuinely surprised to see how much antipathy there was towards this code review announcement. There were a few reasons for that. One has to do with a sort of who's-going-to-watch-the-watchers idea. Professor Bo Wang writes: Did Claude Code write Claude Code review? Next question:
Can Claude Code review review Claude Code's code and make it better, and even create a better Claude Code review? Now, he's being a little bit tongue in cheek, but the idea that the review is likely to bring the same biases that might have created the mistakes in the code in the first place, if people wrote their code with Claude Code, is, I think, a more practical question that a lot of folks have. The much bigger part of the response came around cost. The big thing that really caught people's attention was the pricing. In the pricing section of the Claude code review docs, it says: code review is billed based on token usage. Reviews average $15 to $25, scaling with PR size, codebase complexity and how many issues require verification. And boy, were people shocked at this. Var Epsilon writes: the Claude Code Max $200-a-month plan is literally infinite tokens. You can just write the one prompt to do a PR review locally, save it as a skill, and you get unlimited reviews. $15 to $25 per review is nuts. Dagster Labs' Nick Schrock writes: $15 to $25 USD per review? My lord. Alex Kaplan says: $20 for a PR?! devinreview.com is free. So one part of this, I think, is just a sticker shock argument. If you've got most developers used to paying in the tens of dollars for coding tools and seeing review-type features bundled into a broader plan, then this amount obviously seems much larger. What's more, people are immediately doing the scale math. If a team opens up lots of PRs, $25 per review sounds like it could explode very quickly into hundreds or thousands of dollars per developer per month. Now, it doesn't really matter that Anthropic is explicitly targeting a deeper review experience using multiple specialized agents, and that it's probably not meant for every single thing you have to review. People are just extrapolating out from that number and coming up with some very big numbers on the other side.
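To see where those big numbers come from, here's a rough back-of-the-envelope sketch of the scale math people are doing. The $15-25 per-review range is from Anthropic's docs; the team size and PR volume below are illustrative assumptions, not figures from the episode.

```python
# Back-of-the-envelope cost extrapolation for per-review pricing.
# Only the $15-25 per-review figure comes from Anthropic's docs;
# everything else here is an illustrative assumption.

def monthly_review_cost(prs_per_dev_per_week: float,
                        devs: int,
                        cost_per_review: float,
                        weeks_per_month: float = 4.33) -> float:
    """Estimated monthly spend if every PR gets an AI review."""
    return prs_per_dev_per_week * weeks_per_month * devs * cost_per_review

# A hypothetical 50-person team opening 5 PRs per developer per week,
# at $20 per review:
estimate = monthly_review_cost(prs_per_dev_per_week=5, devs=50, cost_per_review=20)
print(f"${estimate:,.0f}/month")  # roughly $21,650/month for the team
```

That works out to around $433 per developer per month at these assumed volumes, which is why the extrapolations look so different from familiar $20-a-seat tooling budgets.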
Another piece of this, though, is that I think it shows some chinks in the Anthropic and Opus armor right now. For a very long time, Anthropic was the only game in town when it came to coding. This has been well documented on this show to the point where we don't really need to discuss it. However, ever since the release of GPT-5, OpenAI has been explicitly attempting to close that gap and even get out ahead, and increasingly there is some evidence that that effort has been successful. Wes Winder writes: I really don't understand why you would pay $25 for Claude to review a single PR when Opus 4.6 isn't even the best model for deep code review. GPT-5.4 is the only model I trust for reviews right now. Shopify product builder Gil writes: imagine spending $15 to $25 on code review and you still have daily downtime and buggy releases. I'd be more confident in this feature if their production quality was higher. Fairman's Tebow writes: in all our benchmarks, Claude reviews are always just the worst. But don't you worry, now you can pay between $15 and $25 per freaking PR and you'll have good reviews. Are you kidding me? And even some of the first people who are testing it aren't necessarily coming away all that impressed. Daniel San tested Claude code review and said: I'm always the first to get excited when Claude ships something new, but in this case, enabling code review is just not worth it. Now, none of this is to say that there aren't people taking the other side of this argument. Lindy founder Flo writes: People's comments on the $15 to $25 per PR price tag remind me of Michael Bloomberg's answer to people balking at the $2,700 per month cost of the Bloomberg terminal: If you can't make $2,700 a month with our product, you've got bigger problems to deal with. Open Code's Reese Sullivan writes: A $15 to $25 PR review bot that catches an incident that would have cost the company $5 million in breached SLAs and reputation is a no-brainer.
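Sullivan's "no brainer" argument is really an expected-value claim, and it can be made concrete with a small sketch. The $5 million incident figure is from his tweet; the catch rate below is an illustrative assumption, not data.

```python
# Rough expected-value framing of the incident-avoidance argument.
# The $5M incident cost comes from the quoted tweet; the 1-in-10,000
# catch rate is purely an illustrative assumption.

def review_is_worth_it(cost_per_review: float,
                       reviews_per_incident_caught: int,
                       incident_cost: float) -> bool:
    """True if expected incident cost avoided per review exceeds its price."""
    expected_savings_per_review = incident_cost / reviews_per_incident_caught
    return expected_savings_per_review > cost_per_review

# Even if only 1 in 10,000 reviews catches a $5M incident, the expected
# savings per review is $500, well above a $25 price:
print(review_is_worth_it(25, 10_000, 5_000_000))  # True
```

The flip side, of course, is that the argument is only as good as the assumed catch rate, which is exactly what the skeptics quoted above are disputing.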
I think maybe ultimately the even more interesting dimension of the cost conversation has to do with the implications for where things are going. Increasingly, as AI and especially AI coding weaves itself deeper into how we do work, cost profiles which were somewhat ignorable before become not ignorable anymore. Another way of saying it is that AI inference costs start to look a little bit closer to labor costs than to software costs. Sourcegraph CEO Dan Adler writes: I spend much of my week, every week, talking to large enterprise buyers. The appetite for tokens is insatiable. C-level FOMO is off the charts, and every spare dollar is going into Claude Code, Cursor, Amp, etc. Tens or even hundreds of millions of dollars in engineering organizations that cost billions in salaries seems reasonable, but if CTOs can't deliver headcount savings, we're going to see some real whiplash on token budgets in the next two to four quarters. Another way to put what Dan is saying is that something's got to give. The cost of agentic engineering can't keep rising without commensurate cost cuts somewhere else in the organization. The anonymous 4o account on Twitter writes: this marks the beginning of the end of the subsidized inference era. It will only go higher. I think we are just beginning to grapple with what the full-bore cost of AI, when fully utilized, is going to look like, and what it means for the structure of organizations. There is, of course, another piece of this, one that Boris got at in that essay that I read before. As he put it, it's also an identity thing: we're engineers, and reviewing code is what engineers do. From some corners you can almost feel the existential nature of the response. Look at how Montano puts it: We need to admit defeat. We won't be reviewing code before it goes to production. Humans are already the bottleneck now.
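Returning for a moment to Dan Adler's token-budget point, the break-even condition he's gesturing at can be sketched in a few lines. All numbers below are illustrative assumptions, not figures from the episode.

```python
# Sketch of the break-even logic implicit in the "tokens vs headcount"
# argument: token spend is only defensible if it offsets an equivalent
# amount of labor cost. All inputs are illustrative assumptions.

def breakeven_headcount(annual_token_spend: float,
                        fully_loaded_cost_per_engineer: float) -> float:
    """Engineer-equivalents of output the token spend must replace
    (or free up) to pay for itself."""
    return annual_token_spend / fully_loaded_cost_per_engineer

# A hypothetical $30M/year token budget against an assumed $300k fully
# loaded cost per engineer:
print(breakeven_headcount(30_000_000, 300_000))  # 100.0
```

In other words, at these assumed numbers a CTO would need to show the output of roughly 100 engineers gained or replaced, which is the kind of accounting Adler suggests boards will start demanding within a few quarters.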
It wasn't strictly related to this release, but there's been a viral video going around Twitter/X from mo IO with the caption on the video: I was a 10x engineer, now I'm useless. It's actually less dramatic than the caption makes it sound, but it's a real, honest exploration of a lot of the feelings that many developers are having right now as the fundamental nature of what they do has changed underneath their feet almost overnight. And it does feel to me like part of the response to the code review product is a bit like watching the last part of the sandcastle they've spent their whole lives building washed away into the ocean. And why I think this part matters, regardless of whether you're an engineer, is that, as I've frequently said on this show, if you want to understand what other types of knowledge workers are going to be feeling in a year or two, watching how developers handle these changes, and how the broader shape of their field is shifting, is the closest thing we have to peering into the future. There is a deep set of existential questions in this liminal moment, and I think how folks resolve them on a personal, professional and organizational level is going to create a template and a blueprint for how we deal with AI disruption in other areas. Now, there is one more piece of the negative response to code review that I think is worth tracking as well, which is not just about cost but about pricing power. And this, to me, maybe represents another chink in Anthropic's armor, although I don't think it's limited to Anthropic alone. Garbrite writes: feels like the Wild West days of pricing. The general store has you hooked on their supply, they know it, and they're telling you how much they're going to fleece you because they can. Ejaz writes: LOL, Anthropic just killed a $50 billion industry with a single feature. Again, companies pay 50k a year to scan their code for vulnerabilities.
Anthropic's code review does it for you in minutes for a fraction of the cost. Broadloom's Todd Saunders uses an analogy: Anthropic is the new Amazon. Build on our platform, and once you get scale we will build a Basics version of your product and put you out of business instantly. That was a quote tweet of a post from Varun Ram Ganesh, who wrote: at this point it's pretty clear that if you are an app-layer company using the Claude Code SDK, it is inevitable that Anthropic sees your usage and then develops that tool in house. One of the potential reckonings in the AI space is going to be questions of power and consolidation around the very small number of neutron star companies that are just absorbing everything around them. It is worth pointing out, of course, that this particular product is not guaranteed to work. Certainly the teams at Cognition and OpenAI are using this as a bonanza for their own marketing, and maybe the market will force the price of AI code review down. Still, it's very clear, taking a step back, that the response around the code review product was about more than just price. It cut to the quick of the types of issues that are just going to be part of our every day in the period that's coming up next. We will of course continue to track this. I will say I have the sense, like swyx, that while it may take three to six months for everyone to get there, it is very likely that this debate will seem kind of quaint in retrospect. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.
