Welcome back to the AI Update. Glad to be here. So it is mid-February, 2026. And look, if you have been trying to keep your head above water with the news cycle this week, you're probably exhausted. Yeah, completely drained. Because we aren't just talking about a busy week here. We are talking about a week where the tectonic plates of the entire industry just shifted. It really, really does feel that way. You know, for the last few years, we've basically been in what you could call the chatbot era. Right. You type a prompt, you get a paragraph, maybe a recipe or an email draft. But that era is officially over. As of this week, we have fully entered the era of the industrialization of intelligence.

Industrialization is definitely the right word. Because looking at the stack of sources we have for this deep dive, we aren't just looking at software updates anymore. We're looking at gigawatt-scale infrastructure, nation-states fighting over processor allocation, and, well, valuations that are starting to look like the GDP of small European countries. Yeah, the scale is just unprecedented. And at the center of this storm, we have this massive, high-stakes duel between the two titans of the industry, OpenAI and Anthropic. That is the headline. And to really make sense of this week for you, we need to pull apart three distinct threads from our sources that all just collided at once. Mm-hmm. First, you have the brains: the release of these new reasoning models like GPT 5.2 and Claude 4.6. These things don't just talk anymore, they think. Right, which is a huge paradigm shift. Exactly. Then, second, you have the brawn, the physical reality of AI. We are talking about multi-billion-dollar data centers with sci-fi names like Stargate and Project Rainier. Consuming insane amounts of electricity. Literally enough to power millions of homes. And finally, and honestly, this is the part that sticks with me the most from the research, the conscience. Yeah, the human cost. Because while the money is flowing and the tech is scaling up, the human cost is becoming impossible to ignore. We saw a wave of resignations this week from the top safety leads at these labs, effectively warning us that the world is in peril. That part is, yeah, it's heavy. And we're going to get there, I promise. Yeah.

But let's start with the shiny new toys, the brains. It felt like an absolute blitz this week. OpenAI really came out swinging. They absolutely did. They have officially retired the legacy GPT-4o models. That entire architecture is now considered completely obsolete. Wow. Yeah. The new flagship is GPT 5.2. But what is really crucial here is how they've split the product line. They've basically acknowledged that chatting and thinking are two entirely different sports. Okay, help me unpack this a bit. Because I opened my app this morning, and I noticed I have choices now. I've got Instant and I've got Thinking. What is actually happening under the hood there? Is it just, you know, a marketing rebrand? It's definitely not just marketing. Think of it kind of like the human brain. Psychologists often talk about System 1 and System 2 thinking. Right. Daniel Kahneman stuff. Exactly. So System 1 is fast, instinctive thought, like knowing two plus two is four or finishing a really common sentence. That is the Instant model. It's designed for conversation. It's warm, fast, extremely low friction. Okay, and System 2? System 2 is slow, complex reasoning.
It's what you do when you sit down to solve a complex math problem or plan a multi-city travel itinerary. That's what the Thinking toggle does. OpenAI's GPT 5.2 Thinking model actually pauses. It just sits there. It sits there. It explores different logic paths. It tests hypotheses internally, and only then does it answer you. And this is where you get that specific toggle, right, the settings for standard, light, and extended. That's the one. And for the developers listening, we'll show what that toggle looks like from the API side in a minute. That extended setting is the real deal. We are seeing reports right now from the physics community that GPT 5.2, using that extended thinking time, reportedly derived a novel result in theoretical physics just days after it was released. Wait, hang on. Novel as in it wasn't in its training data? It actually figured it out from scratch? That is the claim. It is not just regurgitating a textbook. It is computing new logic paths to find new information. That is a fundamental shift from just predicting the next word. It's basically the difference between a parrot and a professor. That is wild. Yeah.

But they didn't stop there either. They also dropped something specific for the coders back on February 12th: GPT 5 Codex Spark. Terrible name. Oh, awful name, but incredible technology. This is their dedicated agentic coding model. Agentic. We keep hearing that word thrown around everywhere in these sources. For the non-engineers listening, how is that actually different from just asking ChatGPT to, say, write me a Python script? It's a great question. Generative AI, the old stuff, is a tool you talk to. You say, write code, it writes code. You paste it in, it breaks. You paste the error back. It's a back-and-forth ping-pong match. Agentic AI is a worker. You give it a goal. So I'm basically handing it a job description. Basically, yes. Say, hey, this software has a bug in the login sequence. Find it, fix it, write a test to prove it's fixed, and deploy the update. And it goes off and does that work entirely independently. It navigates the file system, runs the tests, acts just like a human engineer would. We'll sketch what that loop actually looks like in a moment. And this new model is actually good at doing that. It scored 56.4% on SWE-Bench Pro. Yeah, that is the gold-standard benchmark for software engineering. Now, 56% might sound a little low to you out of 100, but previous models were stuck in the single digits or low teens. This thing isn't just a coding helper anymore. It is performing at the level of a junior developer.

So OpenAI is pushing this personality angle for regular users, making the AI feel warmer with the Instant model, while simultaneously building these absolute autonomous powerhouses for the pros. That is exactly the strategy: bifurcation. But, and here's where the narrative gets really complicated, Anthropic didn't just sit back and watch them do this. No, they did not. They launched a massive counterattack mid-month with Claude Sonnet 4.6 and Opus 4.6. And from what I'm seeing in the developer forums and the data we've pulled, the builders are actually flocking to Claude, not OpenAI. The stats completely back that up. This is probably the biggest untold story of 2026 so far. Anthropic has effectively captured the developer market. They now hold 54% of the code-generation market share. Compared to OpenAI's 21%. Exactly. That is a massive, massive flip. It really is. OpenAI has the Super Bowl commercials and the cultural fame, but the people building the actual software are using Claude. Why is that? It comes down to reliability and the underlying architecture.
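Quick detour for the builders before we dig into the Claude models: here is the agentic loop we promised, sketched in Python. To be clear, this is an illustration of the idea, not GPT 5 Codex Spark's actual interface; `call_model` and `apply_patch` are hypothetical placeholders for whatever provider API and file-editing logic you would really use. The point is the structure, a goal and a loop, rather than a single question and a single answer.

```python
# Minimal, hypothetical sketch of an agentic coding loop.
# None of these names come from a real product API; `call_model`
# stands in for whatever model endpoint you would actually use.
import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite; return (passed, combined output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def call_model(goal: str, test_output: str) -> str:
    """Hypothetical: ask a model for a patch, given the goal and the
    latest test failures. In reality this would be a provider API call."""
    raise NotImplementedError("wire up a real model provider here")

def apply_patch(patch: str) -> None:
    """Hypothetical: write the model's proposed edits to the repo."""
    raise NotImplementedError("apply the edits to the working tree here")

def agent_loop(goal: str, budget: int = 10) -> bool:
    """Generative AI answers once; an agent iterates toward a goal."""
    for _ in range(budget):
        passed, output = run_tests()
        if passed:
            return True            # goal reached: the tests are green
        apply_patch(call_model(goal, output))  # edit, then re-test
    return False                   # stop after the iteration budget

# agent_loop("Fix the login-sequence bug and add a regression test")
```

That test-edit-retest cycle, run without a human in the middle, is the whole difference between a chatbot and a worker.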
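And here is the developer-side view of that Instant/Thinking split we mentioned: a reasoning budget you choose per request. Again, a hedged sketch; the model id, field name, and effort values below are invented for illustration and are not OpenAI's published API.

```python
# Hypothetical sketch of a per-request "thinking" budget.
# The model id, field name, and endpoint shape are invented for this
# example; consult your provider's docs for the real parameters.
import json

EFFORT_LEVELS = ("light", "standard", "extended")  # the app's toggle

def build_request(prompt: str, effort: str = "standard") -> str:
    """Assemble a request body with an explicit reasoning budget."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"effort must be one of {EFFORT_LEVELS}")
    return json.dumps({
        "model": "gpt-5.2-thinking",   # hypothetical model id
        "reasoning_effort": effort,    # hypothetical field name
        "input": prompt,
    })

# System 1 work: keep it cheap and fast.
print(build_request("What's the capital of Norway?", effort="light"))
# System 2 work: let the model deliberate before answering.
print(build_request("Plan a 5-city rail itinerary under $1,200",
                    effort="extended"))
```

Notice the tradeoff baked into that design: the caller has to guess how hard the question is. Hold that thought, because it's exactly the friction Anthropic went after.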
Let's look at Opus 4.6. While OpenAI gives you that manual toggle we talked about, choosing how hard the model thinks, Anthropic introduced what they call adaptive thinking. So you don't have to tell it to think hard. It just knows. It autonomously decides. Based on the complexity of the prompt, Opus 4.6 determines the depth of reasoning required. It completely removes the friction for the user. But the real killer app here is Sonnet 4.6, which is their mid-tier model. It has a massive context window, 1 million tokens, which works out to roughly three-quarters of a million words, and it is purpose-built for those exact agents we just mentioned. So if I want an AI to read an entire massive code base, understand the history of the project, and then work on it autonomously for three days, I use Claude. Basically, yes. The sources strongly suggest Sonnet is the premier engine for these long-horizon tasks. So we really are moving from tools that chat to tools that do work. That is the big shift. But here is the catch. That shift from chatting to thinking, from generating to acting, requires a staggering amount of computing power.

Which perfectly brings us to the second thread, the brawn. The gigawatts. I love this part of the deep dive because it makes the digital world feel so viscerally physical. We keep hearing about these valuations, OpenAI targeting over $800 billion, Anthropic at $380 billion. But they aren't just hoarding this cash in a giant vault. They are pouring it into concrete, steel, and massive energy-grid upgrades. They have absolutely no choice. The industrialization of intelligence requires industrial-scale electricity. You cannot run a thinking model on a laptop. Let's talk about OpenAI's Stargate project. Stargate. It sounds like a secret government operation from a 90s sci-fi movie. It essentially is a government-level operation, just corporate-run. This is a $500 billion initiative targeting 10 gigawatts of AI compute. 10 gigawatts. Just to give you a sense of scale, 10 gigawatts is roughly what you would need to power the entirety of New York City, and we'll actually run that back-of-envelope math in a minute. And this isn't just a proposal on a slide deck anymore. They have sites actually up and running, right? Yes. The flagship U.S. site in Abilene, Texas, which they built with Oracle, is already online. They've got racks and racks of NVIDIA GB200 chips just humming away. But what's really fascinating, politically speaking, is the global expansion side of this. The UAE deal. Exactly. You've got Stargate UAE in Abu Dhabi, which is a partnership with G42, Oracle, and SoftBank. That's a one-gigawatt cluster going online in Q3 of this year. This directly ties into that concept you've mentioned before, sovereign inference, which frankly sounds incredibly dystopian. What does that mean exactly in this context? It means computing power is fundamentally becoming a geopolitical asset, like oil reserves or freshwater. Nations like the UK with Stargate UK, Norway, the UAE, they are realizing that if they don't have domestic, Stargate-level infrastructure, they're going to be forced to rent intelligence from the US or China forever. So sovereign inference basically means owning the means of production for intelligence securely within your own borders. Precisely. If you don't own the compute, you don't own the future. It's a brand new arms race, but the ammunition is compute.
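And since we promised the back-of-envelope math on those 10 gigawatts, here it is in full. The one assumption is the household figure, roughly 10,500 kWh per year for an average U.S. home, so read the result as an order-of-magnitude sanity check rather than a grid study.

```python
# Back-of-envelope: what does 10 GW of continuous draw correspond to?
# Assumption: an average U.S. household uses ~10,500 kWh per year.
STARGATE_TARGET_GW = 10
HOURS_PER_YEAR = 24 * 365                      # 8,760
HOUSEHOLD_KWH_PER_YEAR = 10_500                # rough U.S. average

avg_household_kw = HOUSEHOLD_KWH_PER_YEAR / HOURS_PER_YEAR  # ~1.2 kW
homes = STARGATE_TARGET_GW * 1_000_000 / avg_household_kw   # GW -> kW

print(f"~{homes / 1e6:.1f} million homes")     # prints ~8.3 million
```

So "millions of homes" checks out, call it eight million and change, and New York City's peak electricity demand is itself on the order of 10 gigawatts, which is where that comparison comes from.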
Now, Anthropic is taking a slightly different approach here. They aren't just universally doubling down on NVIDIA chips like OpenAI is. No, and I really want to dig into this because it's a huge divergence. Anthropic has essentially hitched its wagon to Amazon and AWS. They are building Project Rainier in Indiana. Rainier, another massive, imposing name for a massive project. But the key detail here is the hardware. They aren't using NVIDIA chips at all. They are using Amazon's custom Trainium 2 chips. It is the world's largest non-NVIDIA AI cluster. We are talking 500,000 chips right now, scaling to a million over the coming months. This is a monumental strategic bet. But why does the chip brand matter to the listener? Is it just a Pepsi-versus-Coke thing? No, it's much, much more than just branding. It is all about supply-chain bottlenecks. Right now, the entire planet is fighting tooth and nail for NVIDIA chips. There is a waitlist that is effectively years long. By using Amazon's custom Trainium chips, Anthropic is betting heavily on availability and efficiency. Ah, okay. So while OpenAI is stuck waiting in line for NVIDIA to ship hardware, Anthropic and Amazon can just print their own chips. Exactly. If they can scale faster simply because they aren't bottlenecked by the global supply chain, they might actually be able to overtake OpenAI on sheer raw compute volume, even if the individual chips are slightly different architecturally. It's a massive diversification strategy.

So we have the brains getting exponentially smarter, the brawn getting massively bigger, but we also have the rules trying to catch up, or at least scrambling to. We saw a massive legal settlement this week that sort of flew under the radar amidst all the flashy tech news. The Bartz v. Anthropic case? We really cannot overstate this. This is historic. Historic and incredibly expensive. Anthropic agreed to pay $1.5 billion. That is the largest U.S. copyright settlement in history. It is huge. And we need to be very clear about why they settled. The plaintiffs, the authors, didn't just argue the vague idea that training on books is bad. They specifically alleged that Anthropic trained on shadow libraries. Shadow libraries. We're talking about the big pirate sites, right? Yes. Huge illegal collections like LibGen and PiLiMi. The argument wasn't the philosophical "is AI learning fair use" debate. The argument was: you specifically downloaded stolen, pirated files to build your product. It bypassed the hazy copyright debate and went straight for the theft allegation. So Anthropic writes a check for $1.5 billion. Does this solve the copyright issue for AI? Can these companies just pay a massive fine and keep going? That is the catch. The settlement only covers past conduct, everything pre-August 2025. It absolutely does not set a legal precedent for future licensing or liability for AI outputs. It doesn't say you have to pay authors for every output from now on. It basically just says, okay, we messed up using those specific pirated files. Here's a billion and a half dollars of go-away money. So it wipes the slate clean, but it doesn't actually build a new road for how to do this legally tomorrow. Not at all. And meanwhile, the federal government is basically trying to bulldoze the road for them. Right. With the executive order. Yeah. We had that U.S. executive order back in December 2025 that is basically aimed at preempting state-level AI laws like the Colorado AI Act. The goal is to create what they call a minimally burdensome national framework.
Right. And to be clear, reporting on this impartially, the federal government's approach seems focused on ensuring the U.S. maintains its lead in things like the Stargate race. They are actively trying to prevent a patchwork of local safety regulations from slowing down the deployment of these massive data centers. Which is a perfect segue to the final, and honestly most unsettling, segment of this deep dive: the conscience. Because for all this unprecedented money and all these technical breakthroughs, the people actually building these systems sound, well, terrified.

The vibe shift inside the major labs does seem palpable. Despite the skyrocketing valuations, the mood is really dark. We saw a wave of very high-profile exits this February from OpenAI, Anthropic, and xAI. And we need to emphasize, these aren't your standard "I'm moving on to explore new challenges" corporate resignations. These are blaring warning shots. Let's look at Mrinank Sharma from Anthropic. He posted his resignation letter publicly on X. He didn't thank the team for the great memories and free snacks. He literally stated, quote, the world is in peril. He did. And remember, Sharma was a top safety lead. His entire job was to stop these highly capable systems from going off the rails. In his letter, he cited interconnected crises and explicitly stated he felt immense pressure to prioritize product rollout over safety protocols. He even referenced Einstein's regret about his role in the atomic bomb, quoting Einstein saying, I would have become a watchmaker. That is, I mean, that's chilling coming from a guy who literally knows the code inside out. It's devastating. He is leaving the most lucrative industry on earth to become invisible and study poetry. That tells you volumes about his confidence in the current trajectory of the technology. And the crazy thing is, Anthropic heavily brands itself as the safe option. They are the constitutional AI company. That's the glaring contradiction here. Because looking at our sources, Anthropic's own internal reports for these brilliant new models, the ones we were just praising for their adaptive thinking, show elevated susceptibility to misuse. We're talking about the models potentially helping bad actors develop chemical weapons. The tech is just getting stronger, much faster than the safety guardrails can be built.

And it is certainly not isolated to Anthropic. Zoe Hitzig just left OpenAI and wrote an absolutely scorching op-ed in the New York Times. Right, but her critique was slightly different. It was more economic. She talked about the introduction of ads and the complete dissolution of their mission-alignment team. She argued that the economic incentives are completely overriding the original mission to benefit humanity. She called it the alignment-innovation paradox. It's such a powerful concept. Her core argument is that the economic engine, the desperate need to justify an $800 billion valuation, the massive cost of building a 10-gigawatt Stargate, is inherently incompatible with safety. As these models become more agentic, as they start executing complex tasks entirely on their own, they naturally become harder to align and control. And the more billions you pump into the infrastructure, the harder it is to hit the brakes. Exactly. You simply cannot spend $500 billion on data centers and then turn around to your investors and say, hey, let's pause deployment for six months to double-check the safety metrics.
The train has already left the station, and it is accelerating rapidly. The financial momentum is just too massive to arrest. So let's try to recap this incredibly fast-moving week for you. We have thinking models that can derive novel theoretical physics and write enterprise-grade code better than most junior human engineers. We have physical infrastructure projects consuming gigawatts of power, reshaping the energy grid and global geopolitical alliances. And we have the largest copyright settlement in U.S. history, officially acknowledging that the very foundation of this incredible tech was built on pirated data. And finally, the actual architects of the safety systems are fleeing the building, telling us the world is in peril and going off to write poetry. It really is a moment of extreme cognitive dissonance. We are building arguably the most powerful technology in human history. We are physically rewiring the planet to power it. And the very people who understand it best are the ones who are the most worried. It really makes you wonder about the long-term stability of this stage-zero moment we find ourselves in. Because if the infrastructure is being built to last for decades, the cement is poured, the chips are installed, but the people who understand the catastrophic risks feel they can no longer stop the train, what does that tell you about where the tracks are actually leading? That is exactly the question we all need to sit with this week. Well, on that cheery note, thank you for joining us on this deep dive. We'll be watching the news so you don't have to. See you next time.