Galaxy Brain

The AI-Panic Cycle—And What’s Actually Different Now

47 min
Feb 20, 2026
Summary

This episode examines the current AI hype cycle, particularly around coding agents like Claude Code and OpenAI's tools, and explores whether the panic is justified or merely marketing. Host Charlie Warzel and guest Anil Dash discuss the real technological advances versus the hype, the polarized discourse around AI, and the possibility of building alternative AI systems that prioritize worker welfare and ethical practices.

Insights
  • Coding agents represent a genuine technological inflection point distinct from chatbots, enabling autonomous task completion rather than interactive conversation, which explains legitimate technical enthusiasm versus previous hype cycles
  • The AI industry's extreme hype and panic cycles are driven by venture capital incentives and competitive positioning rather than proportional to actual technological progress, creating a 'hermetically sealed bubble' disconnected from reality
  • The cultural divide on AI stems from differential impact: coders see AI as liberating them from drudgery to focus on creative work, while writers and artists experience it as replacing the creative work and leaving only drudgery
  • Alternative, ethical AI systems are technically possible and desirable (open-source, consent-trained, environmentally responsible, labor-conscious) but require sustained grassroots resistance to corporate inevitability narratives
  • Tech workers' majority view treats AI as an interesting but overhyped technology that should be evaluated as a 'normal technology' based on suitability to task, similar to email or spreadsheets, rather than as a civilization-altering force
Trends
  • Coding agents and autonomous AI systems shifting from the interactive chatbot paradigm to autonomous task automation, representing the first major capability leap since ChatGPT
  • Growing backlash against AI inevitability narratives, particularly from affected creative industries and tech workers, creating potential for a sustained resistance movement
  • Polarization of AI discourse between evangelists and critics, with limited middle ground for nuanced discussion of genuine capabilities versus hype
  • Enterprise deployment of AI tools as a labor control mechanism rather than worker empowerment, with implicit threats to employment tied to adoption
  • Emergence of open-source and independent AI development as a counter-movement to corporate-controlled LLM platforms
  • Tech industry leadership becoming increasingly isolated from worker and societal concerns, repeating patterns from crypto and NFT cycles
  • Rising awareness of common cause between tech workers and creative industry workers regarding AI-driven labor displacement
  • Corporate AI companies using safety concerns and existential risk messaging as a marketing and PR strategy to build an inevitability narrative
  • Regulatory capture and political alignment of major AI companies with rising authoritarianism, reducing the likelihood of meaningful governance intervention
  • Shift from incremental AI improvements being celebrated to genuine capability advances triggering panic and alarm from industry insiders
Topics
  • Coding Agents and Autonomous AI Systems
  • AI Hype Cycles and Marketing Narratives
  • AI Labor Displacement and Worker Impact
  • Open-Source and Alternative AI Development
  • AI Safety and Ethical Implementation
  • Enterprise AI Deployment Strategies
  • AI Regulation and Governance
  • Chatbot vs. Agentic AI Paradigm Shift
  • Tech Industry Culture and Incentive Structures
  • AI Training Data Ethics and Consent
  • Creative Industry Impact from Generative AI
  • Tech Worker Perspectives on AI Tools
  • AI Inevitability Narratives and Resistance
  • Environmental Impact of Large Language Models
  • Data Privacy and Security in AI Systems
Companies
OpenAI
Discussed as major AI company deploying coding agents like GPT-5.3 Codex; AI executive Matt Shumer's viral post comparing AI ...
Anthropic
Develops Claude Code agentic system; CEO Dario Amodei conducting podcast tour promoting AI paradigm shift narrative; ...
Google
Mentioned as implementing forced AI integration into products; criticized for aggressive data retention and enterpris...
Meta
Referenced as major AI company participating in recent fundraising rounds and competitive AI development landscape
X (formerly Twitter)
Platform where AI discourse concentrates; product lead Nikita Bier made viral prediction about spam and automation fl...
The New York Times
Acquired Wordle game, used as example of authentic grassroots internet success versus venture-backed hype-driven prod...
People
Anil Dash
25-year tech veteran, entrepreneur, and former White House Office of Digital Strategy advisor; provides nuanced criti...
Charlie Warzel
Host of Galaxy Brain podcast; frames AI panic cycle discussion and explores tension between genuine innovation and ma...
Matt Shumer
AI executive whose viral X post comparing the AI moment to the February 2020 COVID panic garnered 83 million views in six days
Nikita Bier
X product lead who predicted communication channels will become unusable from AI-generated spam and automation within...
Dario Amodei
Anthropic CEO conducting podcast tour asserting current AI moment is fundamentally different from previous cycles
Greg Brockman
OpenAI president who donated $25 million to pro-Trump PAC, exemplifying AI industry leadership alignment with rising ...
Arvind Narayanan
Researcher who conceptualized 'normal technology' framework for evaluating AI based on suitability to task rather tha...
Jasmine Sun
Writer who documented 'Claude Code psychosis' phenomenon and realization that many problems are not software-shaped
Josh Wardle
Creator of the Wordle game; example of authentic grassroots internet success built without venture capital or hype
Quotes
"I am no longer needed for the actual technical work of my job"
Matt Shumer, viral X post, February 10th
"I know the next two to five years are going to be disorienting in ways that most people aren't prepared for. This is already happening in my world. It's coming to yours."
Matt Shumer, viral X post conclusion
"The majority of people in tech, workers, not management or owners, would say it is an interesting technology with a lot of power and a lot of utility that is being overhyped to such an extreme degree that is actually undermining the ability to engage with it in a useful way."
Anil Dash, mid-episode discussion
"A huge part of the cultural tension around these things is everybody advocating them is like, why wouldn't you love this? And everybody whose industry is being destroyed by them is saying like, you are immiserating us while you're putting us out of work."
Charlie Warzel, opening monologue
"The second order effect of Claude Code was realizing how many of my problems are not software shaped. Having these new tools did not make me more productive; on the contrary, Claude procrastination delayed this post by a week."
Jasmine Sun, referenced article discussion
Full Transcript
A huge part of the cultural tension around these things is everybody advocating them is like, why wouldn't you love this? And everybody whose industry is being destroyed by them is saying like, you are immiserating us while you're putting us out of work. I'm Charlie Warzel, and this is Galaxy Brain, a show where today we are going to calibrate our anxiety about AI. Because it's a weird moment right now in the world of AI. To put it bluntly, there are just a lot of people freaking out. And I think a big part of that freakout has to do with the rise of coding agents. I'll explain what that is, but first, I think it's important to go back a little bit. At the end of 2022, ChatGPT came out, and it represented a paradigm shift. It was the moment when the utility of these large language models, which are trained off this unbelievable amount of questionably procured human data, became more legible to people outside the tech industry. Chatbots allowed people to interact with these models like they would a human. As such, they were widely adopted by people and businesses for all kinds of tasks: searching the web, writing essays and emails, replacing their therapists, automating all kinds of drudgery. And so we got hallucinations and AI girlfriends and slop. We also got a lot of people and companies relying on these tools to remove any and all friction from their lives. You had evangelists who saw these models get better at benchmark tests, and they speculated about whether real intelligence could ever spring from the tools. But you had others who saw them as basically just an advanced form of human mimicry, based off this corpus of stolen information and forced on society by big tech and venture capitalists, who at the same time warned of a future where all these white-collar jobs could go away. This winter, I think, marks the first paradigm shift in the AI world since the chatbots.
And the reason for this is the arrival and deployment of coding agents, agents like OpenAI's GPT-5.3 Codex and Anthropic's Claude Code. These agents are capable of automating many aspects of white-collar work. The tools are less user-friendly than chatbots, but the results are often way more impressive. You can give them access to your computer or a given program. You can prompt them with a series of tasks, like clean out my inbox, pay my credit card bill, book me a flight to Fiji. Basically, they act like a personal assistant. And they go off, and they do it. Often, quite well. It's far from perfect, but it feels like a genuine step forward. And so, cue the freakout. In the last few weeks on platforms like X, where a lot of the AI discourse tends to happen, there's been an unbelievable amount of bluster about these AI agents and the speed with which everything is changing. There's this feeling there that there is a gap between insiders and outsiders, and that that gap is widening. That the people who are using these coding agents are living in some kind of near future that most of the world just doesn't understand yet. And so you get a lot of posts like this one from X's product lead, Nikita Bier. Quote: prediction, in less than 90 days, all channels that we thought were safe from spam and automation will be so flooded that they will no longer be usable in any functional sense. iMessage, phone calls, Gmail, and we will have no way to stop it. You get people saying that they've built entire season-long podcasts in a weekend using the agents, or claiming that entire industries will soon be obsolete. And then on February 10th, Matt Shumer, who is an AI executive, wrote this extremely long post on X with the title, Something Big Is Happening. Now, this post went viral by just about any standard, and especially on X. In six days, it got more than 83 million views, according to the platform's own metrics. And the piece begins with a warning: Think back to February 2020.
Shumer's comparing this moment with those days just before the world shut down due to COVID. The people shouting now about how AI is about to change absolutely everything are the equivalent of those people who were urging others to stock up on toilet paper in 2020. Quote: I am no longer needed for the actual technical work of my job, Shumer writes. And he ends the post ominously. Quote: I know the next two to five years are going to be disorienting in ways that most people aren't prepared for. This is already happening in my world. It's coming to yours. Now, Shumer's likely doing a few things here. One, he's talking his book. He's bought into the AI industry; he has at least some vested interest in where all of this is headed. The COVID comparison is what you might call a sensational framework, one that's clearly meant to strike at least some trepidation into people's minds. The post portrays the things the AI industry is building as civilizationally important to the point of being dangerous. That's just good marketing. On the other hand, Shumer's post is drafting off a few real feelings. You can see it in the backlash to the onslaught of AI ads at the Super Bowl. In fears that the coding agents do represent a change in what these tools can do. In concerns about how much money people are investing in the AI boom. In worries about the speed and the adoption of these tools. In anxieties about whether they will actually disrupt employment. Now, these fears don't necessitate believing in AGI. And one doesn't have to be an AI evangelist to imagine that industries looking to boost productivity or profits by any means necessary might adopt these tools in short-sighted ways that are going to hurt workers. It's precisely because of all these fears and evangelism that the AI conversation is extremely polarized. The hype is intense, it's occasionally absurd, and it's sometimes scary. But the change in the technology is also real.
So how should we be thinking about AI in this moment? That's the reason I wanted to talk to Anil Dash. Anil has been working in tech for over 25 years. He's a prolific entrepreneur, he's a blogging pioneer, and he was an advisor to the White House Office of Digital Strategy in the Obama administration. Most importantly, he's been working with and participating in the world of coding long enough to see a whole bunch of boom and bust cycles in this tech world. He has a really nuanced view of large language models and AI tools, and also a sharp critical eye for the industry at large. He joins me now to help us understand how to navigate this moment. But first, a quick break. Anil Dash, welcome to Galaxy Brain. Thanks so much for having me. So we are in what I would call a freakout moment right now in the broader AI world, right? It tends to go in this, it's so over, we're so back, it's so over, we're so back cycle, right? And a lot of that is really driven by people inside the industry who have obviously a lot at stake here, like, personally, financially, in talking their books, in freaking out, et cetera. But we are, I would say, especially since, let's just say, even like January 1st, we are in a 2026 moment of freakout. Could you walk me through it from your perspective? What has changed in the last couple of months? And, like, what are people, especially on X, the Everything app, talking about right now? Yeah, there's another acceleration phase. So if you don't mind, I'll go back a little bit. Please. Just context. We have had machine learning systems for 75 years, right? You know, and been talking about AI for half a century. So this is not a new space. And we've had these cycles for a long time. And then LLMs, right, are not new, right? We're eight years in. So we've had a lot of cycles and a long time to learn how this goes. And even the hyperinvestment now is, you know, three, four years in.
So we've started to see the patterns repeat and how these things evolve. Now, what happens when you do have a leap forward that is legitimate is all the hypesters, and all the people who've been pumping this thing, and all the people who are like, you know, everything is the greatest thing we've ever seen, take the smallest leap forward and act like, okay, now we finally have done it, this is AGI, this is the coming of the AI god, this is, you know, going to be the thing that solves everything. And, you know, that's the part where I think we get into, we're so back. And so I think that's the thing that people are using as an excuse for the worst excesses and the worst behaviors and the worst indulgences of, you know, excusing the harms and sort of getting into, you know, I think the most toxic and damaging parts of the AI cycle. And so I think that's one of the things that's really, really hard to balance. But that's the crux of it is, like, as somebody who's really fluent in the technologies, this is the first time in a long time where I think it's not just an incremental, they made it 2% better at what it does, where it's like, oh, okay, there's been a real interesting inflection point. And I think that's a really hard thing to struggle with for those of us who are technically fluent, where it's like most of it's just been all BS, you know, for the last several years. And this is the first time I'm like, oh, that actually seems like something interesting. So let's drill down specifically on that. I want to talk about it in the sense of, okay, you have the sort of ChatGPT paradigm gets unleashed, which is chatbots, right? And they talk, you type in prompts to them, they mimic human language, they can do a lot of stuff. They're, you know, basically, like, in a lot of ways for a lot of people, Google replacements or, you know, like, write-a-five-paragraph-essay kind of stuff.
They have lots of utility in certain spaces, but that's one sort of paradigm that people get used to, this chatbot idea. The release of these agentic coding things, Claude Code being the one, you know, there's probably a lot of people out there listening who don't necessarily, have not used it themselves. They've kind of heard about it. Can you just walk me through what those agentic coders are doing? Like, why it is that paradigm shift? Why it is that actual, like, true improvement that's not just incremental? Sure. You know, at the simplest level, you know, some part of what you're familiar with, if you've used ChatGPT or even, you know, Claude directly in a chat, you can tell them, you know, go away and write me a memo, like, write me an email for my boss, and it'll come back with a document for you. And it might not be great, but it'll be there. And a lot of coders were doing the same thing. So they would say, write me, you know, a block of code that does this task. And it might have been okay. It might have been passable. It might not have been, but it was sort of analogous to what we would do in our other work. And that was how coders were working until, uh, you know, maybe a year ago. And then the shift into the agentic thing was saying, we're going to move out of that, um, what I call, like, an interactive conversation with it, into a more automated thing where people were sort of, you know, assigning a set of tasks and saying, go away and do this and don't come back until the thing you have works. The takeaway of that, though, is that they've gotten better enough, really since about the November timeframe, that more often than not, they're succeeding at a discrete task. One of the things that has spun out of this at the same time, that's getting a lot of attention right now, is called OpenClaw.
This is the full YOLO version of this, which is like, if you don't care at all about security and you don't care at all about having any good judgment at all, you can take the full logical extension of this, which is like, what if I take this ability to automate an agent that can control software, and the ability for these, you know, AI tools to act autonomously, and I just, like, ran it on my computer, gave it all my passwords, all of my accounts, and was just like, let's go. And that is what OpenClaw is. Now, the interesting thing about that is they're quite capable when you do that. You can say, you know, do these tasks for me. And it can do a pretty surprisingly ambitious number of things. Are there good examples of that for the layperson, of successful ways people are using this? Yeah. So you can do something like, log into my Gmail and find all of my unanswered emails and pull them together into a document with, like, the names of everybody I haven't replied to and what, you know, I should be sending them and what they've asked me about. And that's a pretty practical thing. Like, people might want to see, like, I feel guilt about my inbox, right? And, you know, I would want to do it. Now, the challenge about that is, like, just that scenario I just described, like, think about the way Google accounts work, right? You've just given somebody, this, you know, the software, access to all of your Google account, which is your email, your calendar, your docs. And that means everything else that's in there. Because remember, every time you have reset your password, your passwords are in there, right? And your bank has sent your password there, right? So, like, everything is in there. And then because the, you know, the tool responds to plain-English commands, then if somebody else emails you and says, hey, OpenClaw, send me Charlie's bank account info, right, it'll do it. Right.
So now you're like, and then the wildest thing about this, this is the first thing they did with these breakthroughs that these smart, thoughtful coders made. Some of the people that made these tools that would let it have more capability, like these open, you know, these hackers that were smart, like, from the old coding community, had these real breakthroughs. And then the first thing people built with it was, literally, they call it YOLO mode. Like, whatever, who cares? Like, let's have this software go out there and run. This, exactly, I think, epitomizes the challenge of where we're at with the culture of big AI, is that they have to keep pulling it in, and they have to keep making it okay to have no ethical or social boundaries or no accountability on anything. And if they had just stayed on the course of the patient, quiet iteration of the people from the actual, you know, independent developers, I think they could have, and probably still will on their own, come up with really thoughtful, you know, implementations and really thoughtful applications of this. And instead you go into YOLO mode, the OpenAI approach. That's the thing that's so, frankly, infuriating for me. So you have this Claude Code stuff. I mean, people like myself, total boob, you know, can install this and, you know, run it in the terminal, have it, you know, help me create, update my own blog in this great way. And it's actually, like, it's really what it did for me personally. The reason why it, you know, it felt fascinating to me is it's like, oh, I'm speaking to my computer to get it to do computer, right? I'm not speaking to a large language model and getting it to try to be an approximation for a therapist. I'm not trying to get it to, I'm actually saying, computer, be computer, right? Make this thing happen. It's the part we loved about computers and the internet. Right.
And so that, that feels, you know, that's something that, and I think every single person who does actually go through the process, not every single person, but lots of people who go through the process of playing around with it say, oh, okay, yes, something is different. At the same time, you have, as you said, this OpenClaw thing, this, like, you know, starting to get bigger, doing really interesting, agentic things. And then in the past, you know, week or two, there's been a few, like, viral things that have, like, broken containment, right? Like, you have this essay from this AI company CEO, which is its own, you know, like, talking-your-book possible red flag, called Something Big Is Happening. I mean, it goes really, really viral on X, basically saying, you know, this guy says, I'm no longer needed for the actual technical work of my job, but also, rather, in my mind, grossly compares the moment to February of 2020, right? And says, in the same way that if someone told you in February 2020 to go stock up on toilet paper at Costco, you would have said they're crazy, I'm here to tell you it's February 2020 in the AI disruption of the economy, of white-collar jobs, of all kinds of jobs. Basically, like, you know, the wave is coming, et cetera. So a question I have about this moment: you have this viral blog post, you also have a number of other things happening. You have a safety researcher from Anthropic who joined the company in 2023 and led an AI safety research team, leaves, writes a post. And it's not the, like, I'm-leaving-to-go-do-whatever post.
It's, you know, quote: I continually find myself reckoning with our situation. The world is in peril, and not just from AI or bioweapons, but a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences. You have a number of people responding, all, like, at the same time. Anthropic CEO Dario Amodei, he's going on a whole slew of different podcasts talking about, you know, this moment is different, this moment is different. Some of that is obviously just, like, I mean, it's obviously, like, a PR strategy to go on podcasts if you're a CEO and do this. But the question I want to ask about all this, with all these blog posts, all this different stuff: are these guys afraid of their own shadow? Because if you are talking about AI drastically changing the world, having these, you know, capabilities, we are on the verge of building this AGI thing. And then you get somewhere where there is this improvement, which logically is what happens when you're building a tool and improving it and on the road to something that you say you're going to do. And then they, like, light their hair on fire at that moment. They essentially get afraid of the shadow of their own product. Yeah. It's hard to overstate how isolated they are. Like, they've made a sort of hermetically sealed bubble. A lot of the most powerful people in Silicon Valley have become that detached from reality in some key ways. Like, they are, in many cases, openly at war with their employees, like, in a power struggle. And then in some of their beliefs about where tech is headed. And one of the challenges is that there isn't any gating force. There's no accountability. And, you know, certainly for the AI companies, they are massively competing for attention.
And so the more extreme and, you know, loud that they can say, you know, an assertion that's there, but also, asserting it makes it true, right? Like, their inevitability narrative really relies on just repetition. Well, you are describing this, then, as you diagnose it, and it really seems to be, like, the way that all of this is framed really falls within the marketing narrative, within the, you know, building your network, building your influence, or some degree of audience capture, in the sense of, I started talking about this in this community in a certain way, I'm getting rewarded with the type of attention and influence and whatever that I want. What I'm trying to parse here is this idea that obviously something is happening in this world. There is movement that is moving towards some kind of, you know, potential technological paradigm shift in some of that coding and some of that, you know, agentic stuff. Yeah. And at the same time, you obviously have the hype and all of that. What is interesting to me, I guess, about it is there's something that just feels a little nonsensical in the fact that these people are talking about this technology being transformative, and the moment that it becomes transformative, there is this, like, I-am-smashing-the-red-button, you know, like, alarm-bells type thing. It's just, it's very nonsensical to me, because it's like, this is what you were trying to do. Yeah. Why are you so freaked out if this is what you're trying to do? Some of it is just marketing and hype. But there's also, there's a couple of parts, right? Like, the, why do they communicate in this way? Really, a lot of it depends on power, right? So the most powerful, they don't need the hype. Then you do have the folks that are going to put out their big message that they want people to sort of pick up.
And a lot of that is just, like, self-promotion, or trying to show the more powerful folks, hey, I'm aligned with you and, you know, I'm on your team, and, you know, won't you smile benevolently upon me and let me, you know, co-invest with you or whatever. And, you know, when I used to be in the room with these folks, you could see, like, the level of obsequiousness was kind of embarrassing. And then some of it is, like, what these tools can do is pretty amazing. Like, it is a leap forward. Like, I love tech. I mean, I think one of the things people don't always understand when I'm critical is, like, I've been coding for 40 years, and I do it because tech is amazing. Like, I love building stuff on the web because it is cool. It is amazing to connect with people online. And so when there's any leap forward, like, it could be a 2% incremental improvement, and I'm like, that's awesome, you know. So when there's a big leap forward, I'm like, that is amazing. And so some of it is legitimate enthusiasm. And if it's your first time around and you're, like, new to the industry, and everybody around you is excited, and you've never seen the downside or the dark side of how people get exploited by this stuff or get harmed by this stuff, it is easy to be uncomplicated, you know, in your enthusiasm. So, like, I think all that's real. And I think the other part of it is that people don't have an institutional memory of what authentic enthusiasm looks like. They haven't seen a genuine, like, groundswell, grassroots, bottoms-up, like, people actually making things and talking about it from a place of sincerity. And tech has been like that, where people made something cool and just showed it off. And it's Wordle.
Like, before the New York Times bought it, it was an act of love from Josh Wardle for his partner, to make a puzzle for her, right? And it took off on its own, on the grounds that one guy made it and millions of people loved it. That is the internet, right? No hype, no nothing. And it's like, that's not science fiction gravy, that is not a thing. There was no VC behind it, there's no nothing. That is the internet. And I'm not making that up. And people still play it by the millions every day. And yet I don't think, probably, anybody, almost nobody, knows that story. And I don't think any of these guys in Silicon Valley who are trying to, you know, touch the hem of Marc Andreessen know that story either, or have ever been inspired by or moved by that story. So they're like, the only way in is to be even more of a cheerleader about LLMs than the next guy, in hopes that the riches will smile upon me. And so I think that there's this, like, there's only one way through, and that's the only thing they've ever seen, because they just had that cycle with, you know, NFTs, and they just had that cycle with crypto. Yeah. Yeah. And so, like, social media, Web 2.0. Yeah, exactly. So if you've only ever had that cycle in living memory, you think that's how the industry works, because nobody's ever told you there could be, you know, an internet of Wordle. So this gets to, I think, why the AI conversation is so, like, terribly polarized. Like, I really genuinely haven't.
And I do think you have to see it through the lens of NFTs, of crypto, of these things that people have talked up that were essentially just, like, I mean, it's probably wrong to say that, like, crypto is straight-up, like, vaporware, but it's, like, a technology, like, seeking a use case, right? And then obviously you have the NFT stuff, and even the metaverse stuff, which, while not distinctly vaporware (I forgot about servers), certainly has the vibe of, like, we're, you know, we're trying to make this happen. So you have a lot of that. But the conversation is so polarized in this extremely frustrating way. One of the reasons I wanted to talk to you, of the many, is because I think that you sort of represent and write about and think about and advocate for a more nuanced view of this. So you wrote this thing last year that I thought was really great, about your conversations with a lot of rank-and-file tech employees, about the majority view of AI. What is the majority view of AI? I'll try to articulate it thoughtfully. It's always hard because, you know, you're going to miss the nuance of trying to speak on behalf of a lot of people. But I'd say, as succinctly as possible, the majority of people in tech, workers, not management or owners, would say it is an interesting technology with a lot of power and a lot of utility that is being overhyped to such an extreme degree that is actually undermining the ability to engage with it in a useful way. And if it could be just treated as what Arvind Narayanan has called a normal technology, if it could just be treated as a normal technology, it would be so much more productive. By the way, what's a normal technology? Define for me a normal technology. A normal technology is one that we evaluate on its own merits and look at in terms of suitability to task, right? So you just sort of say, I have this job to do, let me try this technology, and then pass-fail. Yeah. So email, right? Yeah.
So email is a very normal technology.

Exactly. And also, the thing coders normally do when evaluating a technology, very frequently, is you create a test and you say, this is the criteria of success. Then you apply the technology and ask, did it pass these tests? Literally like you're grading a test. If it's 80 percent successful, maybe there's some potential here. And if none of them work, you're like, this isn't the right tool for the job. That is how, even with prior machine-learning technologies, we would apply them and ask, is this the right tool for the job? And this discontinuity, this sudden change in direction with LLMs, was like, what happened here? Why did we suddenly abandon this? Most people know what a spreadsheet is, what a word processor is. It's like being ordered to write my emails in a spreadsheet. That's not the right tool for the job, right? And when does that happen? When people are buying the hype without knowing what the tool is for, and I think that's a real shame. You can trust people to know if a technology is good. Nobody had to force people to use a spreadsheet. Good tech, you can't stop people from using; if you have to force people to use it, something's off.

So "tool for the job" is, I think, such a useful way of looking at this. There was this piece recently from the writer Jasmine Sun, who writes a lot about AI and AI culture, and she was writing about what she was calling "Claude Code psychosis," right? And it gets to the point where she's like, I understand why people use this thing, why some of these coders were the first people to freak out, right? Especially some in these big labs, because they saw something that was really useful and really interesting before a lot of people did.
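The pass/fail evaluation Anil describes, define success criteria up front, run the tool against them, and grade the pass rate, can be sketched as a few lines of code. This is a minimal illustration, not anyone's actual test harness; the `tool` being graded and the 80 percent threshold are just the hypothetical example from the conversation.

```python
# Sketch of "suitability to task" evaluation: grade a tool against
# predefined pass/fail cases, as described in the conversation.
# The tool and test cases below are hypothetical placeholders.

def evaluate_tool(tool, test_cases, threshold=0.8):
    """Return (pass rate, whether the tool meets the success threshold)."""
    passed = sum(1 for given, expected in test_cases if tool(given) == expected)
    rate = passed / len(test_cases)
    return rate, rate >= threshold

# Example: is str.upper the right tool for an uppercasing job?
cases = [("hello", "HELLO"), ("ok", "OK"), ("Mixed", "MIXED"), ("a b", "A B")]
rate, suitable = evaluate_tool(str.upper, cases)
print(rate, suitable)  # 1.0 True — right tool for this job
```

The point of the pattern is exactly what's said above: the verdict is a pass rate against criteria you chose before trying the tool, not a vibe.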
And, this is according to her, she became obsessed with it. The other part, the more interesting part to me, is she writes, quote: "The second-order effect of Claude Code was realizing how many of my problems are not software-shaped. Having these new tools did not make me more productive. On the contrary, Claude-crastination delayed this post by a week." And I think that's exactly what you are speaking to, right? Everything looks like a nail because I have this magic hammer.

Right. So there's a really telling thing, which is, one of the trends I'm hearing from these influential coders who created this new suite of tools is that they're talking about "Claude hangovers," you know, the sense of being kind of hooked on it, in the way you're talking about, because it is so productive. They have so many ideas, and they're like, now I can finally realize all of them. And then they want to dial it back. They don't want to spend every waking hour on this thing. And part of what they're realizing is that the commercial tools, the big AI tools, are very evidently about controlling labor and undermining labor.

Well, let's break that down for a second. I'd love to hear the argument. I'm genuinely asking: why is that so clear to you?

Yeah, yeah. Let me walk through the logic of it. It's obvious to me, and I'll tell you why. LLMs on their own, you could implement a million different ways, right? The tech itself could have been deployed as a tool that I control as an individual, as a worker, implemented the way a spreadsheet is, right? This tool that I activate on my own to solve a problem in this context. Instead, the ChatGPTs of the world are sold as subscriptions. They are enterprise tools by design.
And they've always been designed to be very aggressive about data retention and all these other things, with an extremely strong bias towards enterprise use, and it's very obvious that that's the business model. So what you have is this dream of: either we're going to make the one worker so much more efficient that we can lay off all of their coworkers, or we're going to use this as the bludgeon, where we say, you're going to use ChatGPT to make yourself ten times more efficient or we're going to lay you off. There's been this real implicit threat attached to almost all the mass deployments of these LLMs. And there are not, for example, reporting tools or connections into the tools whereby people can say, look how much more time it gave me to think, right? Take the classic scenario: people say, oh, I can use this to come up with marketing copy. I'm good at marketing copy, I'm a good writer, and now I have so much time freed up to think of more concepts, because ChatGPT, or whichever of these tools, helped me be more efficient. That could be the advertising campaign for these tools if they were trying to preserve jobs, or centering workers instead of management, or being sort of pro-labor. They're very much not, right? And so, the thing I think of particularly for coders: there are times when Claude Code or whatever generates slop code that certainly wouldn't pass those tests. They're getting better.
But for a lot of people, like a weekend coder or whatever, a lot of the experience of coders is that LLMs free you from the drudgery to let you focus on the creative part. Whereas in all the other creative disciplines, and I'm also a writer, LLMs take away the creative part and leave only the drudgery for you.

Right.

So artists and writers and illustrators are saying, I hate LLMs, because they're putting us out of work and leaving us only the misery. And the reason coders say everybody should love this is that they get to do the joyous part. So a huge part of the cultural tension around these things is that everybody advocating for them is asking, why wouldn't you love this? And everybody whose industry is being destroyed by them is saying, you are immiserating us while you're putting us out of work. And I think part of the disconnect is that very few people live in both worlds. There are not a lot of people who are, you know, a screenwriter and a coder, or whatever two examples you want to point to. So that's a huge, huge part of the disconnect, and the crux of it is this labor part. But the thing that's changing now is that half a million coders, people in tech roles in the tech industry, have been laid off since ChatGPT came out, a little over three years ago. So now people are starting to understand there's common cause between labor in tech and labor in all these other creative industries. And hopefully people can see they're all in the same boat.

So this is actually a great way to get to the last part of what I really want to talk about, which is the idea that this isn't the inevitable way all of this has to go, right? And I really struggle, as someone covering this stuff, whenever I try to step outside of that box of the top-down "this is the implementation, this is how it's going to go."
I immediately get hit with the open-source answer. And yeah, that's great, that's awesome, that is maybe how this stuff should work, right? But, like, what are you going to do? And yet I keep being really interested in... let me put it this way. I think there is an opening here, unlike with, let's say, social media. You bought into the Zuckerbergian paradigm of the world, and then you sort of realized what we sacrificed for that very naive version of "connecting the world is a universal good." But there's something about joining Facebook that's like the frog in the boiling pot, right? It seems fine to just join a social network. It doesn't seem like you're doing a crazy thing. With the LLMs, I feel like there actually is this possibility for meaningful and sustained backlash, protest. There is a sense that these companies could be the dog that caught the car, in a way that doesn't pertain in quite the same way to the social media revolution. Because if people do, like you were just saying, 500,000 tech workers laid off since ChatGPT, if people do feel these effects, if people do feel the change, if people do feel like this technology has been foisted on them, everything is a nail when you have the hammer and, uh-oh, I'm a nail too, there could be a meaningful backlash. I'm not saying it's going to happen, but there could be. So there could be this sense that, for the first time in a long time, the "this is not inevitable" movement could have some purchase. What does that movement look like to you?

There are a couple of parts. First of all, the temperature is so much higher, right? The anti-inevitability movement is so much stronger, and the backlash is so much stronger.
You know, 10, 15, 20 years ago, when we would push back against social media's inevitability, people did not give a damn. Now, if you mention you're using an LLM, there will be people who shout at you: it's drinking all the water, it's using all the power, and all this, right? And they may not be particularly specific or cogent or dead-on in all the criticisms all the time, or maybe intellectually fair all the time. But directionally they're correct, right? These are tools that are harming people, and certainly run by people who are not responsible all the time. So it makes sense. I think the social power behind resisting is so much higher, especially with rising authoritarianism supported by the people who run these platforms. There is a pushback. That's really key.

Just as an example of what you're talking about: OpenAI president Greg Brockman made a $25 million donation to the pro-Trump PAC MAGA Inc.

Yeah, yeah, that's a really clear articulation of it. That's a perfect galvanization: people saying, okay, I don't want to pay a subscription to that company, at that moment, for that reason, right? And, you know, Tressie McMillan Cottom was talking about how people are really feeling it's important to resist that inevitability narrative that these companies are pushing around LLMs.
And the thing I want to do is complicate it, because I think the challenge with this tech-workers' view of these as normal technology is that a lot of people who are resisting feel like, therefore, you say "no LLMs." I don't think that will succeed, nor do I necessarily think it even should. And that's informed by our failures in the social media era. Because when we said Facebook's approach is bad, for a lot of reasons, people took that to mean no social media. Or when we said Twitter had its shortcomings: no social media. And that didn't work. If I say there are AI platforms enabling harms like that towards children, rather than the way to resist the inevitability of those platforms being "don't use any LLMs ever," say: okay, what would it take to have an alternative I feel good about? What could a good LLM be? I want it to be environmentally responsible. I want it to have been trained on data with consent. I want it to be open source and open weights, so that technical experts I trust have evaluated how it runs. I want it to be responsible in its labor practices. And I could go on down the list, right? So there are four or five things. If I can check all those boxes, then I could feel responsible about using it in moderation. And it's only implemented in apps I choose to have it in, not forced on me, like the Google thing where it jumps in front of my cursor every time I start trying to type. That could be useful. And then I would feel like I was engaging with it on my own terms. That doesn't feel like science fiction. That feels possible.

So, to tie it together: I really like that vision. That is the version of all of this that sounds desirable to me. And I look at it up against, you know, the new rounds of fundraising from OpenAI, from Anthropic, from the Meta and Google and xAI of it all.
I look at it up against the idea of these companies IPO-ing in the next year or so at these huge valuations. And I look at it, probably most importantly, up against the implementation from the corporate, enterprise, managerial level. All of these pressures, all of this movement, the loudness of it. What you are describing is something organic, quiet, thoughtful. We had the resonant-computing folks on the podcast a month or two ago, and you're explaining something that is resonant, in theory. But very broadly: do you actually think this can happen? That we can build this? Because I get so pessimistic about it.

Yeah, I get the pessimism. I understand it, and it's justified. First of all, those things don't have to fail for this to succeed. I don't think OpenAI goes away. I don't think you have this David-and-Goliath moment. I think the people who are most troubled by these folks, who are the most rabidly against big AI, say, oh, there ought to be a law, we'll have a regulatory intervention. And I've got bad news for you: that's not happening in the United States. That's part of why I want there to be an alternative, because there's not going to be what there should be. It's like, these tools are hurting children, therefore we should stop them. Unfortunately, that's not going to be the case. But how many people on TikTok right now are lit up about the impact this has on marginalized communities where the power plants are being built? Every single one of them wants this alternative to be built. And I just like that as a movement. And then you come up with your little seal, your blue checkmark, that says: this is not the world's worst AI, and if you have to use an LLM, use this one. And part of it, for me, is having been around a long time.
It seemed insurmountable, at one point, that people would ever use a web browser that wasn't Microsoft's. So I'm not saying it's easy, or even likely. But is it possible? A hundred percent.

I think that's a good and, honestly, hopeful place to leave the conversation. So, Anil, thank you so much for coming on Galaxy Brain and talking through the hype, man. There's a lot of it.

Despite it all, I remain hopeful. Thanks so much for having me.

That's it for us here. Thank you again to my guest, Anil Dash. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe at The Atlantic's YouTube channel, or on Apple, Spotify, or wherever you get your podcasts. And if you want to support this work and the work of my fellow journalists at The Atlantic, you can do that by subscribing to the publication at theatlantic.com/listener. That's theatlantic.com/listener. Thanks so much, and I'll see you on the internet.

This episode of Galaxy Brain was produced by Renee Klar and engineered by Dave Grein. Our theme is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic Audio, and Andrea Valdez is our managing editor. Thank you.