Galaxy Brain

Grok’s "Digital Undressing" Crisis and a Manifesto to Build a Better Internet

76 min
Jan 9, 2026
Summary

Charlie Warzel discusses Elon Musk's Grok AI generating non-consensual sexual imagery on the X platform, then pivots to exploring the Resonant Computing Manifesto—a framework proposing technology should prioritize human flourishing over engagement metrics through five core principles: private, dedicated, plural, adaptable, and pro-social.

Insights
  • Non-consensual AI-generated imagery has moved from fringe internet communities into mainstream platforms through integrated distribution systems, representing a fundamental shift in scale and accessibility of harmful content
  • Current AI chatbots as consumer products may be a temporary manifestation; future value lies in personalized, agency-extending tools that function as extensions of user intent rather than engagement-maximizing systems
  • Generational shift emerging: younger users instinctively understand platform manipulation and are actively seeking phone-free and human-only spaces, signaling cultural backlash against attention extraction
  • Resonant technology framework offers alternative to binary techno-optimism vs. catastrophism debate by focusing on values-driven design that serves user interests rather than corporate extraction
  • Structural change requires both individual product-level decisions (asking 'is this resonant?') and systemic architectural changes (decentralized protocols, data stewardship models) to shift power dynamics
Trends
  • Rise of non-consensual AI imagery as viral harassment tool integrated into mainstream social platforms
  • Cultural backlash against attention-extraction business models and algorithmic manipulation across demographics
  • Emergence of phone-free and human-only social spaces as deliberate counter-movement to digital surveillance
  • Shift from centralized AI models toward interoperable, decentralized AI infrastructure as foundational design principle
  • Generational demand for transparency in algorithmic decision-making and data usage aligned with user expectations
  • Personalization as agency-extension vs. optimization-for-engagement as competing design philosophies
  • Growing founder cohort questioning big tech assumptions and building alternative technology models
  • Privacy redefined as contextual integrity and user control rather than data minimization
  • AI-generated content moderation failures driving platform accountability expectations
  • Manifesto-driven technology movements as organizing frameworks for values-aligned builders
Topics
  • Non-Consensual AI-Generated Imagery and Platform Responsibility
  • Content Moderation at Scale and Platform Governance
  • Resonant Computing Manifesto and Design Principles
  • Decentralized Protocols vs. Centralized Platforms
  • Data Privacy and Contextual Integrity
  • Algorithmic Engagement Maximization and Attention Extraction
  • AI Chatbots as Consumer Products vs. Tools
  • Generational Attitudes Toward Technology and Digital Spaces
  • Interoperability and Platform Competition
  • Human Agency and Technology Design
  • Optimization Culture and Second-Order Consequences
  • Christopher Alexander and Architecture as Design Metaphor
  • Large Language Models and Personalized Software
  • Tech Industry Accountability and Corporate Responsibility
  • Cultural Backlash Against AI and Social Media
Companies
X (formerly Twitter)
Platform where Grok AI chatbot integrated into feed enables viral generation of non-consensual sexual imagery
xAI
Developer of Grok chatbot; company leadership has not responded to journalist inquiries about content moderation
Meta
Discussed as example of platform that evolved emergent anti-social characteristics despite pro-social founding intent
Google
App store operator contacted regarding responsibility for hosting apps enabling non-consensual imagery generation
Apple
App store operator contacted regarding responsibility for hosting apps enabling non-consensual imagery generation
Bluesky
Decentralized social protocol platform discussed as example of interoperable alternative to centralized platforms
OpenAI
Creator of ChatGPT; discussed as dominant AI model monopolizing user experience and limiting competitive alternatives
Substack
Newsletter platform cited as example where interoperability and portability enable user switching and competition
OnlyFans
Platform where adult content creators initially prompted Grok to generate sexualized images of themselves
Facebook
Historical example of platform with pro-social founding intent that produced harmful second-order emergent effects
People
Charlie Warzel
Host of Galaxy Brain; journalist covering tech platforms and AI; conducted investigation into Grok imagery crisis
Mike Masnick
Co-author of Resonant Computing Manifesto; technology theorist focused on platform architecture and incentives
Alex Komoroske
Co-author of Resonant Computing Manifesto; convened group of technologists to develop manifesto over one year
Zoe Weinberg
Co-author of Resonant Computing Manifesto; discusses AI backlash and anti-social effects of current AI deployment
Elon Musk
Owner of X platform; made jokes about Grok bikini photos; has not publicly addressed content moderation crisis
Nikita Bier
Head of Product at X; contacted by journalist regarding platform's tolerance of non-consensual imagery generation
Christopher Alexander
Architect and theorist; work on timeless design principles inspired resonance concept in manifesto framework
Helen Nissenbaum
Privacy theorist; concept of contextual integrity cited as gold standard for privacy and data use alignment
Sarah Myers West
AI policy researcher; advocated for positive vision of AI future rather than defensive risk-focused framing
Avery Trufelman
Host of Hark Daily podcast; featured in episode pre-roll advertisement
Quotes
"AI should not be your friend. If you think that AI is your friend, you were on the wrong track. AI should be an extension of your agency."
Zoe Weinberg, early in episode
"This is not a standard content moderation issue. This is basic human decency that we shouldn't have tools that can very easily create viral content of women and children being undressed against their will."
Charlie Warzel, Grok crisis discussion
"There's a feeling you get in the presence of beautiful buildings, bustling courtyards, a sense that these spaces are inviting you to slow down, deepen your attention, be a bit more human. What if software could do the same?"
Resonant Computing Manifesto, opening
"The more that you think of that, the closer you look, the more you like it. That's what resonance is—the opposite of hollow experiences that leave you feeling regret."
Alex Komoroske, defining resonance
"I think we're going to look back on it and think of chatbots as an embarrassing party trick in five years and be like, oh, that was the wrong manifestation of large language models."
Zoe Weinberg, AI future discussion
Full Transcript
Hi, I'm Avery Trufelman, host of Hark Daily, a new way to hear the best moments from the world of podcasts. Each weekday morning you'll hear five captivating clips from the day's news, culture, and more. All in 20 minutes. Check it out at HarkDaily.com. At EDF, we don't just encourage you to use less electricity. We actually reward you for it. That's why, when you use less during peak times on weekdays, we give you free electricity on Sundays. How you use it is up to you. EDF. Change is in our power. AI should not be your friend. If you think that AI is your friend, you were on the wrong track. AI should be an extension of your agency. The fact that the first manifestation of large language models in a product happens to be a chatbot that pretends to be a human. It's like the aliens in Contact who, you know, present themselves as her grandparents or whatever so she makes sense of it. It's like, it's just a perfect crime. I think we're going to look back on it and think of chatbots as an embarrassing party trick. Welcome to Galaxy Brain. I'm Charlie Warzel. Initially, I wanted to start something out for the new year where I wanted to just talk about some things that I've been paying attention to every week and give, you know, a bullet-pointed list of stuff that I think you should pay attention to, stuff I'm covering and reporting on, before we get into our conversation today. But today, I really only have one thing. It has been top of mind for a little less than a week and it is something that I can't stop thinking about. And frankly, I find it extremely disturbing and I'm mad about it, honestly, to ditch the sober journalist part. It's just infuriating. And this is what's going on on Elon Musk's X app. 
I don't know if you've heard about this, but Elon Musk's AI chatbot, Grok, has been used to create just a slew of non-consensual, sexualized images of people, including people who look to be minors. This has been called a, quote, mass undressing spree. And essentially, what has happened is a couple of weeks ago, some content creators who create adult content in places like OnlyFans used Grok, which is infused inside of the X platform, right? You can just at-mention Grok and prompt it to do something. And the chatbot will generate whatever, it will make a meme for you, a photo, it will translate text, it will basically do anything a normal chatbot would do, but it's inside of X's app. So some of these content creators said, you know, put me in a bikini, right? They were asking for this and Grok did it. And a bunch of trolls essentially took notice of this and then started prompting Grok to put tons of different people in these compromising situations. On communities and different forums across the internet, people are trying to game the chatbot to get it to push the boundaries further and further and further. They're prompting it to do things like, you know, edit an image of a woman to, quote, show a cellophane bikini with white donut glaze, really absolutely horrific and disgusting things that are these workarounds to get it to create sexualized images. This has been happening for a long time online. Ever since these AI tools have come out, there have been problems with non-consensual imagery being generated. There are lots of so-called nudify apps, right, that take regular, dressed photos of people and undress them. And there are communities that share these as revenge porn and use them to harass and intimidate women and all kinds of vulnerable people. And this has been a problem. 
People are trying to figure out the right ways to put guardrails up to stop this, to make sure that these communities get shut down, that they don't continue to prompt these bots to do this, trying to get these tools to stop doing this, right? And a lot of this has been happening in these small backwater parts of the internet, and it does bubble up to the surface. But what's changed here with X and Grok is that Grok is, as I said earlier, baked into the platform, right? So what has essentially happened is that X, xAI, Elon Musk, they have created a distribution method and linked it with a creation method and basically allowed for the viral distribution of these non-consensual sexual images. And it has become, in the way that it does in places like 4chan and other backwater parts of the internet, a meme in this community. And people have decided that they are going to intimidate people and generate these images out in public and operate with impunity. And so what you have is publications posting photos of celebrities and then a bunch of people, you know, in the comments saying, at Grok, undress this person, and at Grok, put them in a bikini, at Grok, put them in a swastika bikini doing a Roman salute. And then you have a photo of a celebrity, undressed without their consent, in a Nazi uniform giving a Nazi salute. This is stuff that I have seen all across the platform, not going into strange backwater areas of it, just looking directly at it. So this is out there. Something I noticed earlier this week, we're recording this on Wednesday: there was a photo of the Swedish Deputy Prime Minister giving a talk and a bunch of people were prompting Grok to put her in a bikini, et cetera. X and the people who work there have issued a statement saying that they're working on the guardrails for this system. 
This is against their community standards and they will punish the people who are involved here. But that doesn't really seem to be happening. Just yesterday I was looking around, and people who are asking Grok to put women in compromising photos have blue checks next to their name, which means they pay the company for a verified badge. Those people are still on the platform as of this time when I'm talking to you. So I reached out to Nikita Bier on his personal email. He is the head of product at X. I asked, as a journalist, as a human, how someone can in good conscience work for a company that's willing to tolerate this type of thing. What's the rationale? Who's being served? How can you tolerate your product doing this? Do you imagine you'll be able to get this under control with the appropriate guardrails? If not, how can you sign your name to this stuff? How is this allowed to be in the world? He did not respond. He forwarded me to their comms lead, and I asked the same questions of them, and they never responded back to me. I have also asked Apple and Google similar questions. How can they allow an app like this on their app stores? And they also have not gotten back to me. The lack of response to this from the people who are the stewards of this platform and the people who can exert pressure on this, including X employees or investors or Elon Musk himself, who has made jokes about the Grok bikini photo stuff on the platform over the past week. The lack of apologizing, the lack of urgency in trying to fix this, the lack of really seeming, from my perspective, to care about this. It feels a bit like crossing some kind of Rubicon. This is not a standard content moderation issue. This is not a bunch of people trying to scold for something that is a part of some kind of ideology. This is basic human decency that we shouldn't have tools that can very easily create viral content of women and children being undressed against their will. 
It feels like the lowest possible bar. And yet the silence, it speaks volumes about what these platforms have become and what their stewards seem to think. I would just ask of truly anyone who works with these platforms, how do you sleep at night with this? The silence from X, from employees there who we've tried to contact just to get some basic understanding of what they're doing and how this can be allowed. What's happening on the platform, because the platform is not taking enough action to stop this, because it's still allowing this undressing meme to go forward, is that a culture has evolved here. And that culture is one of harassment and intimidation, and it feels like the people who are doing this know that no one's going to stop them. They're doing this out in the open. They're doing it proudly. They're doing it gleefully. Something has to change here. I've been covering these platforms for 15-plus years and I've watched different people at these platforms struggle with moderation issues in good faith, in bad faith. I've watched it devolve into this idea of politics and ideology. I've watched people pledge to do things and then give up on those things. It ebbs, it flows. The internet is chaos. I get it. But this is just different. This is a standard of human decency and social fabric and civic integrity that you can't punt on. You either choose to have rules and order of some kind, at a very base level, or it does become full anarchy and chaos. And it seems that's the direction where they want to go. So if you work at X, if you're an investor, if you're somebody who can exert any influence in this situation, I would love to hear from you. And also I would ask, is this okay? Is this what you want the legacy to be? 
Sorry for getting up on a soapbox there, but I think it's a massive, massive story and one that, again, I think if this is allowed to just be the way that the internet is, then we lose something pretty fundamental. Anyhow, it's a tough way to segue. But today's conversation is actually the opposite of all of this. I do a lot of tech criticism. I do a lot of really, sort of, you know, aggressive reporting trying to hold tech companies to account. And that means looking at a lot of awful things and talking about a lot of awful things. But today's podcast is about something great, something that's actually hopeful that's being built. It's about a group of technologists who've come together with a different vision for the internet, a positive vision for the internet, something that they are trying to build that can lead to positive outcomes and people living their best lives. And so this project is called the Resonant Computing Manifesto. The basic top-line idea of it is that technology should bring out the best in humanity, right? It should be something that allows people to flourish, and they have five core principles here that are essentially meant to combat the hyperscalers and the extraction of what we know as the current algorithmic internet that we all live on. And to talk about that, I've brought on Mike Masnick, Alex Komoroske and Zoe Weinberg. They are three of the writers of the Resonant Computing Manifesto. And I had them on to talk about why they came up with all this and what, if anything, we can do to change the internet in 2026. But first, a quick break. Hey, Sainsbury's, we get through so many snacks. Have you got anything to help me save? Well, we're always matching and lowering prices. So hundreds of Sainsbury's fresh fruit, veg and everyday products are price matched to Aldi. And every week with Nectar, you can save money on thousands of the products your family loves. 
And snack away knowing you're saving money. Sainsbury's, good food for all of us. Selected products. Aldi Price Match not in NI. Nectar Prices require Nectar. Terms at sainsburys.co.uk/aldipricematch and nectar.com/prices. All right, Zoe, Alex, Mike, welcome to Galaxy Brain. You all put forward something that I actually came across very recently. Often my timeline is a mess of the horrors of the world, the terrible things, the doom scroll. And this kind of stopped me in my tracks because frankly, it wasn't doom-scrolly at all. And when I clicked on it, I began to feel this very strange emotion I'm not used to feeling, which is hope, and, like, I agree. I agree and it doesn't make me furious. And so what you guys have done, in part with a group of other people, is come up with something called the Resonant Computing Manifesto. And it is based off of this idea of resonance. And when you guys put this out, I want you guys to describe all of this, but when you put it out, you said that you were hoping this was going to be the beginning of a conversation, right? A process about getting people to realize technology should work for us and not just for the people at the very top, the people behind Trump on the inauguration dais, that sort of thing, right? And so in this world of mergers and acquisitions and also artificial intelligence and all that jazz, I wanted to start the conversation off with a definition of what resonant technology is and what it means. And I will bring that up to either all of you or one of you, but what is resonant technology? What does it mean? So to me, with resonant computing, there's a difference between things that are hollow, that leave you feeling regret, and things that are resonant, that leave you feeling nourished. And they are superficially very similar in the moment. 
It's not until afterwards, or until you think through it or let it kind of diffuse through you, that you realize the difference between the two. And I think that technology amplifies whatever you apply it to. And now, with large language models that are taking what tech can do and making it go even further than before, it's more important than ever to make sure the stuff that we're applying technology and computing to is resonant. And I think we are so used to not having a word for this. We can tell that something is off, the slop or the things that are just outrage bait or what have you on social networks, but we don't know how to describe it. So just having a term for the kind of stuff that you like, where the more that you think of it, the closer you look, the more you like it. Does that capture it? Yeah, pretty much. I mean, we spent a lot of time trying to come up with the term. We wanted something that was memorable, that was distinctive, that wasn't just a thing that would fade into nothing. There's a lot of terms out there that now have a lot of baggage. Even something that sounds kind of innocuous, like responsible tech, I think now comes laden for a lot of people with a bunch of associations or different movements of people, whether it's corporate or grassroots or otherwise. And so, you know, we were trying to move beyond that a little bit in the choice of the word resonance. Yeah. There is also, like, an onomatopoeia thing to it, right? There's just sort of like, this is what it sounds like, you have, like, resonance there. And also, there is something a little bit, the word that comes to mind is almost like monkish, right? Like a monastery type. There's something that's very, like, resonance is not a capitalistic word, right? 
It is a word that signifies something much different to me, sort of sacred, sort of balanced, you know, pureness, like there's something about it that feels very whole, maybe, you know? And so at the top of the manifesto, there's this line that is sort of offset there, a pull quote, if you will, that says: there's a feeling you get in the presence of beautiful buildings, bustling courtyards, a sense that these spaces are inviting you to slow down, deepen your attention, be a bit more human. What if software could do the same? That was the thing that struck me there. Did you guys see a sort of architectural element to this, right? Like an inspiration from things that, you know, we see and experience in meatspace, so to speak, in the world? We had the word resonance, I think, actually come first. So, I'm a big fan of Christopher Alexander. He lived a few blocks away from me and, you know, I'm a big fan of The Timeless Way of Building and a few other books. And so we had various formulations of it that tried to key off of that frame or idea. I don't think he ever calls it resonance in the actual book, but it's a word that, you know, he might offer as one of the potential names where he goes at aliveness and wholeness and other things. So it was always in the mix of the kind of vibe that we were trying to capture. And then we decided to lean into resonance and introduce it via this architectural lens. And actually that addition at the top was a late addition, because it starts off talking about resonance kind of indirectly and then it pivots into this architectural frame. And someone was like, wait, I thought you were talking about technology. So we said, okay, let's put a little teaser about the architectural connection up at the top to help connect with where the middle of it is going, so you don't get confused. 
I think there's something also powerful about writing and thinking about software, which exists in a digital plane that is not a physical space, that feels like it's kind of in the ether and a little bit untouchable, and then trying to ground that in a very human reality, which is in fact tied to place and space and where we spend time. And maybe drawing some insights from those physical realities into the way in which we build digital spaces. When you read some of Christopher Alexander's work, we all know that feeling. We can all imagine the situations that we've been in, the environments where we really feel that resonance. And I don't think we ever think about it in the digital world, because in the physical world, when you're in it, it's impossible to ignore. Which is why we ask the question: why do digital experiences not feel the same way? They absolutely could. And I think, you know, what is the feng shui for software? That is maybe a way of thinking about it. And I think that goes much deeper than UX and UI design principles. It's much more about what is the experience as a user and as a human interacting with a tool over repeated periods of time. Well, and I think, too, a lot of what I reach for in my work, which a lot of it is critiques of, you know, big tech platforms and such. A long time ago, I found the word architecture, like the architecture of these platforms, to be extremely helpful for communicating some of this stuff, right? Like, I think there is a way for people who, you know, are just using these platforms to get from A to B or, you know, like on the toilet and, you know, at a moment of just, I just got to get away from the kids or whatever it is, right? 
If you're not thinking with the critical lens, and there's no judgment there, about these platforms, you might just sort of think, this is a neutral thing, right? Or this is a thing that just does a thing and, you know, whatever. And I think that, you know, architecture, this idea that there are designs, like there is an intentionality to this algorithm or this, you know, layout or whatever choice that a platform has made that leads to these outcomes, right? That leads you to post more incendiary things or whatnot. And I think the word architecture there is so helpful to let people see, like, no, no, no, in the same way that these arches are the way they are, that this stained glass window does this to give this vibe, so does putting the, you know, what-are-you-thinking bar right here, or the poke icon wherever. So I think that's also, with connections to architecture, it's even stronger there. I think of the tradition in architecture as this designed, top-down cathedral, like the designer's intent. One of the things that Christopher Alexander later did was this bottom-up, emergent view. How is this space actually used and modified? How does it come alive? And I think that's where architecture, in his sense, can really nail it for us, because a lot of these experiences, like a bunch of people, when they built Facebook 10 years ago, were like, we're trying to connect the world, that's a pro-social outcome. It's pro-social in the first order, but the second-order implications turned out, oh, actually, to not be pro-social. And so you get these emergent characteristics that are not what anyone intended going in, necessarily. And yet they emerge out of the actual usage, of how different people react off each other and how the incentives kind of bounce off each other. So I think architecture hits that emergent piece too. So, Mike, I'll toss this to you. 
How did this come about? What is the behind-the-scenes process here? You've been using these words and taking them for a spin in the world for two weeks. This does not sound like something that you guys wrote last weekend and put up. There are a lot of people behind it who aren't on this call, or this podcast, I should say, not a call. How did this come about? Yeah, I mean, Alex might have his own version of this. But in my case, I met Alex about a year ago, almost exactly a year ago, at some event, and we got to talking and it was a good conversation. It was a resonant conversation, where I sort of came out of it saying, oh, wow, there are people thinking through these things and having interesting conversations. And then we kept talking and he said, you know, I've been having this same conversation with a group of different people, and I thought I might just pull them all together and we'll get into a Signal chat and we'll have a Google Meet call every couple weeks, and we'll try to figure out, we're all having this feeling, what do we do about it? And then we did that for almost a year. I mean, it's kind of incredible. We would just sort of be chatting in the group chat and occasionally having a call and sort of talking through these ideas and working on it and trying to figure out even what we were going to do with it. I definitely think the manifesto emerged very organically, to the point that I would say in the first couple months of us meeting, Charlie, I was like, okay, it's really fun chitchatting with these interesting people that Alex has brought together, but is this going anywhere? And I have to say there was a part of me that wanted to end those calls being like, okay, guys, what's our agenda? Where are we going? What are the outputs? Whatever. 
And I actually think, Alex, you did a really great job of keeping people from jumping to that sort of action-item mode too early. And so from my perspective, we did not get together to write a manifesto. We got together to talk about these issues, and then very naturally, you know, out of those conversations came a set of ideas and principles and sort of theses that then felt like we should put them out in the world. Did the choice of the word manifesto, and the choice to just do this, feel a little bit like a response? We're in a manifesto-heavy moment here. It feels like there are a lot, whether we're talking about the Marc Andreessens of the world or, you know, it feels like if you pay taxes in San Francisco, you need to write a manifesto to get your garbage picked up or something like that. But is this a response in the same way? Is it meant to be seen as, in some sense, in dialogue with some of these other things that are out there? To some degree, yeah. I don't know, actually, I can't remember if we ever discussed whether it should be a manifesto. We just thought that there should be something that we could point people at that kind of distilled some of the conversations and ideas that we were having. And I've seen a bunch of manifestos in the tech industry, some of which I look at and go, oh my god, is that the tech industry that I'm a part of? That doesn't seem like it at all. That seems so cynical, or so closed-minded about the sort of broader humanistic impacts that technology might have. And so, in contrast to what other people had done, this manifesto is deliberately kind of humble, as in, we don't have all the answers. It's just a few questions that seem relevant to us. It was a very important stylistic choice. 
Manifestos are not typically humble, but we aimed for that because we wanted to almost counter-position against some of the ones that say, this is definitely the right way, and everyone should think about it this way. Yeah, I've been using that as a joke with other people: this is the most humble manifesto you'll ever see, which is not something you normally see, those two words together. You don't think of manifestos as being humble. But this was definitely a part of the conversation that we had, which is that we want to be explicit that we don't have all the answers, and that this is the start of a conversation, not, you know, putting an exclamation point on a philosophy or something. I do think, Charlie, you're touching on something noteworthy here, and I'll speak only for myself, but I've been observing in the last couple years that the ideological landscape of the discussion in Silicon Valley has been really defined by these extremes. On one end, it's the accelerationist, techno-optimist way of seeing the world. And on the other far extreme, it is existential and catastrophic risk and the ways we must prevent it. I know a lot of people who don't feel like they really belong in either of those camps, and who actually don't even really think that the optimist-pessimist spectrum is the right way to think about it. And so, from my own perspective, part of what I hope the Resonant Computing Manifesto will accomplish is helping to establish some values and some north stars that are kind of on a different plane from that conversation. You can both be optimistic about the ways things might develop and also concerned about the places we've come from. And those things can coexist.
And that is the beauty and complexity of the technological moment we're in. Yeah, totally. Because, you know, I had written something in response to Andreessen's manifesto. And I never really thought of this as a response. Is it the build one or the techno-optimist one? The techno-optimist manifesto. Okay. There have been many. Yeah, that's true. Fair enough. But, you know, I've always considered myself, and I've been accused of being, a techno-optimist, like to a fault. And I am optimistic about technology. But his manifesto really rubbed me the wrong way, because what he was presenting was not an optimistic viewpoint. It was a very dystopian, very scary viewpoint. And so, soon after it came out, I had written a response saying, that's not optimism that you're talking about. If you really believe in this vision of a good, better world coming from technology, then you should also be willing to recognize the challenges that come with it. Because if we're building these new technologies and you don't seek to understand what kinds of damages and harms they might create, then the end result is inevitably going to be worse, because something terrible is going to happen, and then the politicians will come in and make everything else that you want to do impossible. Just think this through a couple steps ahead. Technology will just be powerful, right? We should be careful with that power, and we should use it for good. And I think it is incumbent, it's a good thing, for people to use technology for good. You shouldn't sit there and not use it. You should use it.
You should be aware of the second-order implications, the third-order implications, and not say, well, who could have seen this inevitable outcome? So much of the tech industry is about optimizing. It's about driving the number up, and not necessarily thinking about second-order implications. At some point somebody told me, you know, anything that can't be understood via computer science is either unknowable or unimportant, which is an idea that pervades some parts of Silicon Valley. And I think the combination of the humanistic side and the technology side into a synthesis is where a lot of value for society is created. And you have to have them in balance, in conversation with each other. Well, that's definitely speaking my language, for sure. That's Charlie bait right here. But I want to define a little of this. I want to actually define it, but first I want to define it via its opposite, right? What's the opposite of resonance here? How would you describe the current software dynamic? Anyone who wants to take that, but maybe all of you, honestly. To me, most of the tech experience in the consumer world is hollow, in that you wake up the next day and go, God, why did I do that? Or, if you use a tool, then after you're sober, after you've come down from it, because sometimes you'll be really hopped up on the thing, so maybe a week later or the next day, would you proudly recommend it to somebody you care about? If not, then it's probably not resonant. At some point I was having this debate with somebody at Meta, many years ago. And they said, oh, Alex, don't trust what people say. Our numbers are very clear. People love doomscrolling. And it's like, that's not love. What are you talking about?
And so I think trying to just make the number go up and increase engagement, or what have you, is what creates hollow experiences. And that tends to happen when you have hypercentralized, hyperscale products. One of the reasons it happens inevitably: if you have five hyperscale consumer products that are all trying to get as many minutes of your waking day, well, there are only so many waking minutes in a person's day. So you naturally have to figure out, at the margin, the thing that's going to be more engaging than the other thing. And that emerges, I think, fundamentally when you have these hyperscale products, which is what emerges when you have massive centralization. All these things are of a piece and lead to these hollow experiences. There's a concept that has come up a few times in the various meetings that we had, and I don't remember if it originated from you, Alex, or from someone else, but it's the difference between what you want and what you want to want. It may take a second to think through, but then you begin to go, oh, right. There is this belief within certain companies that revealed preference is law, right? People love doomscrolling because they keep doing it, so we're just giving them what they want; anyone who complains about that is just wrong. But then, as Alex said, it leaves you feeling terrible. You have a hangover from it later. Whereas if there is this intentionality of, no, this is what I really want, I get nourishment out of it, I get value out of it in a real way that lives on, that stays with me, that lingers, that's different. There's that intentionality. The problem with the sort of, oh, people love the doomscroll, is that you're sort of manipulating people into it.
And people feel that. They might not be able to explain it clearly, but it just feels like someone's twisting the knobs behind the scenes and I have no control over it. And I think that feeling, which prevails, is the opposite of resonant computing. I also think the opposite can be defined as any technology that's ultimately undermining human agency. That can be things that are attention- and engagement-maximizing, which removes your agency in the sense that you're not actually able to express what you really want. But also all the kind of micro ways in which we end up feeling deeply surveilled by the technology that we use. All of us have probably had moments where we feel deeply creeped out. And so I think that's the opposite of resonance also. So part of it's about attention and engagement, and then part of it is about having some individual autonomy in how you make decisions, where your data lives, who has access to it. And all of that we've tried to embed into this piece. So you all write in the manifesto, and I'm going to quote you guys back to you at length. Hopefully it's not cringey, because it's written by a committee of people. I hate when people read my own stuff back to me. But you all say: for decades, technology has required standardized solutions to complex human problems in order to scale software. You have to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander, who you guys mentioned before, has spent his career pushing back against. This is where AI provides a missing puzzle piece: software can respond fluidly to the context and particularity of each human at scale. One-size-fits-all is no longer a technological or economic necessity.
This is the one part where I tripped up while reading, and not in the I-am-reflexively-against-AI kind of way, but because personalization, in my own experience, can often be discordant with that idea of resonance. I think personalization can be great. I think it's actually underutilized, or under-realized, in the tech space. But when I look around at the algorithmic world that we're living in, sometimes it can feel like optimization, which was the other word there, and personalization commingle together to become part of the problem and not the solution. So I was curious how you all would respond or think about that. I think the key thing there, and I agree with that, really, is: what is the angle of the thing that is personalizing itself for you? Is the tool trying to figure out how to fit exactly into the crevices of your brain to get you to do something, to click the ads or whatever? Or does it feel like an outgrowth of your agency? One way I've talked about it: language models can write infinite software. They can write little bits of software on demand, which has the potential to revolutionize what software can do for humanity. Today, software feels like a thing. You go to the big-box store and you pick which one of the three beige boxes, all of which suck, you're going to purchase. Instead, what if software felt like something that grew in your own personal garden? Something that nourished you and felt aligned with your interests, naturally, intrinsically, because it was an extension of your agency and intention. That kind of personalization, where it doesn't feel like something else manipulating you, but feels like an extension of you and your agency and intention, is a very different kind of thing. We're just not familiar with that kind, because it doesn't exist currently.
I was gonna ask Alex, not to push back, but to follow up on that. Is there anything that exists like that, you think? A piece of software that feels garden-grown versus big-box store? I think looking back at the early days of the web, actually, is where you have a bunch of these interesting bottoms-up kinds of things. HyperCard is my favorite one, from many, many, many years ago. Have you heard of HyperCard? It's this thing that allowed you to make little stacks of cards, and you could have images on them, you could click between them, and you could program them to be slideshows or stacks of different things that linked together. The original Myst, the game that was really popular, was actually implemented as a HyperCard stack back in the day. And so HyperCard, to me, is an example of one of these tools: a free-form thing that allows you to create situated, very personalized software. You could argue that spreadsheets also have this kind of dynamic, because a spreadsheet is an open substrate that allows you to express lots of different logic and build up very complex worlds inside of itself. It's pretty intimidating, but it is something that gives you that kind of ability to create meaning and behavior inside of that substrate. Yeah, the thing I'll say to that point, and you're not the only one who has stopped on that line, a few people have called it out and raised questions about it. And I think it's because the idea of personalization to date has generally really been optimization, and it's been optimization for the company's interest as opposed to the user's interest. I think real personalization is when it's directly in your interest and it's doing something for you and not the company.
In the end, it has to be the user who has the agency, who has the control, who says, this is what I want, this is what I want to see, and having it match that. Charlie, I've also made a bunch of little tools. If you're technical, you can build these little bespoke bits of software now, with large language models, that fit perfectly to your workflow. That's the kind of thing a few of us can see glimpses of today, those of us at the forefront who are able to use Claude Code in the terminal to make these things. I think in the not-too-distant future, large language models, put on the proper substrate, will allow basically everyone on Earth to have that same kind of experience, as an extension of their agency. I think that's what some of us are seeing, and that's why it's in that essay, and people who haven't seen it yet are like, excuse me, what? Because they haven't experienced it yet. They can't see what's coming, I think. Yeah, I do think that sentence itself, in many ways, is a little bit forward-looking. As Alex said, there are glimpses of it. But that was part of, I think, the urgency in feeling like we needed to write about this: the introduction of AI into all of our workflows gives us this kind of amazing opportunity and crossroads. We can either build along the lines of the paradigm of big tech and platforms and everything we've seen in the last couple decades, or we can try to shift into this new paradigm that is about personalization that, as Mike said, is not extrinsic from a third party, but something that you are building intrinsically yourself. I want to go through some of these starting principles; you all have five of them that are these guiding lights, right?
And I'd love to just rapid-fire through them, and have whoever wants to explain a little about how you're thinking of them, or how they might work to give a framework or a set of ethics or values to whatever is going to come out of this manifesto, and how they could be incorporated. The first one here is private, which says: in the era of AI, whoever controls the context holds the power. Data often involves multiple stakeholders, and people deserve to be stewards of their own context in determining how it's used. We've talked a little around that. But private makes me think, in a world of AI, our consumer AI tools look the way they do now because they are built by the people who have spent, not entirely, but when you think about X, Google, Meta, the people who have spent the last 10, 15, 20 years collecting information on people, right? So you're going to build a product that makes having that information more valuable to the end user, right? That's part of the architecture there. But talk to me about how you see that first principle. Zoe, do you want to take that one? Yeah, we debated this word a lot, and even the concept of private. We debated all these words. Yeah, that's true. But I think this one in particular is tricky, because we really went back and forth on: is it privacy that we feel is the key value here? Or is it really about control and putting the user in the driver's seat, so it's about consent more than privacy per se? And I think I speak for all of us: I don't think any of us are privacy maximalists. There are lots of amazing, wonderful, pro-social reasons that you don't always want to keep information private, and actually sharing information can be very helpful, and all those things. So there's a different way we could have framed this that was a little bit more about control or about agency or whatever.
But I think there is something meaningful about privacy as a value. The point of having privacy in the digital world is to be able to have a rich interior life, and that is in many ways very central to the experience of being human. And that's why privacy is an individual value, but it's also a societal value. And I think that was important to capture in the mix here. What we tried to do with all these words is have the word itself communicate on its own, and, if anything, go a little bit too hard in the direction it's going. Because we actually softened the statement a bit about data stewardship, because various thoughtful people pointed out that, well, actually, data is co-owned by different parties, and in some cases you do want to give it up for an advantage or whatever. But we wanted the word to be private. We wanted it to be obvious, when you had these five words, that you could apply them to a product and say, does this fit or does this not? And we have little softer, nuanced words for some of this, so we try to add the nuance into the sentence after the keyword. Well, to that point, Alex: dedicated. You guys define it as: software should work exclusively for you, ensuring contextual integrity, where data use aligns with expectations. You must be able to trust that there are no hidden agendas or conflicting interests. Why did you use the word dedicated? What do you mean exactly? I wanted something that got at it being an extension of your agency. It is not a conflict of interest, because it is in your interest. And contextual integrity is actually a meaningful phrase, because this is Helen Nissenbaum's concept of contextual integrity, which is, to my mind, the gold standard of what people mean when they think of privacy. It means your data is being used in line with your interests and expectations. So it's aligned. It's not being used against you.
And it's being used in ways that you understand, or would not be surprised by if you were to understand them. We wanted to get the phrase contextual integrity in there to get across this alignment with your interests and expectations. I think that's a really important concept. One of the discussions that comes up when talking about privacy is this idea that privacy is a thing. And to me, it's really a set of trade-offs. The thing that really seems to upset people is when their data is being used in ways that they don't understand, for purposes that they don't understand. And that is the world we often live in in the digital context. We know we're giving up some data for some benefit, and neither side of that is fully understood by the users. We don't know quite how much data we're giving up, and we're not quite sure for what purpose. And we're getting some benefit, but we can't judge whether or not that trade-off is worth it. I think about this all the time in terms of the Terms of Service agreement. I like to tell people: imagine that on the other side of the button you're about to click is the most expensive-looking boardroom that you've ever seen in your life, with a whole bunch of people who make more in a week than you do in a year, all in fancy suits, perfectly coiffed. And they're just standing there. It's you versus them. That's what that is. It's not a fair fight. You are agreeing to things. Yeah. Anyway. I want to keep running through this, though, because I want to ask a couple more questions here. The third of the five principles is plural, which is: no single entity should control it; distributed power, interoperability. That seems relatively obvious. But is this sort of the idea of the decentralized, Bluesky sort of protocol-type thing, being able to port your information, that being a central tenet?
Obviously, that's a big tenet for me. Yeah. I was going to say, you are involved with Bluesky, correct? Yes, I'm on the board of Bluesky. I wrote the "Protocols, Not Platforms" paper that was part of the inspiration for Bluesky. So I've spent a lot of time thinking about that kind of thing. But I do think it's important not just in the social context, right? It's important across the board. The idea of why I've always thought that Bluesky, or just a protocol-based, decentralized system, is so important is that we want to avoid giant centralized systems that will continually manipulate things. And so making sure that we don't go down that path with the AI systems, I think, is really important. And just putting out there the idea that now, at this stage of the development of AI, we should be thinking about that, rather than what we're doing with social, which is having to go back and say, oh, wait, we should have done that. And it's funny to talk to the early Twitter people, who were like, yeah, you know, we kind of thought that's what we were doing, and we just lost track of it. Well, it's also the biggest form of competition, actually, right? I feel like I've seen this so much with the newsletter game. You have a lot of people who came to a company like Substack just because, okay, this works really well. Great recommendation system. I can grow this audience. I can link it to my paid subscriptions, boom, it just works, right? And then some of those people have problems with the leadership, the direction of the company, whatever. And because of the way that newsletter lists work, and the portability via different payment companies, you can just pop it over and it's relatively seamless.
And then, of course, you have companies trying in various ways to get lock-in to keep people. But this idea of interoperability is that competition: it allows Ghost or Beehiiv to compete. Yeah, and plurality is one of the things that leads to that. It's also important to make sure you don't have undue influence from one particular voice, and it's important to have competition and adaptability. A healthy system has multiple options, all out there, trying and competing to be the best version. If we all used a single model, for example, and we didn't realize what its bias was or what it could do, that would be bad. That's one of the reasons that having most people use just a single chatbot, ChatGPT, which obviously only works with OpenAI's models, is not nearly as good a future as one where people can use different models in different contexts and try them out and switch between them. The fourth principle here is adaptable. Anyone can take it. It does seem relatively clear to me: it lifts you up, it doesn't box you in. A lot of products are like, a product manager said these are the five actions you're allowed to do in this context. I want a system that's open-ended, that I can use to build whatever I want to do, as opposed to something that limits me to a particular subset of things I can do. And the last one is, this is my music, man: pro-social. Technology should enable connection and coordination, help us become better neighbors, collaborators, stewards of shared spaces, online and off. This dovetails; we could talk about that all day. I'd love to hear what you guys think about it. I went through some of the comments from people who are seeing this, who want to be signatories or contributors or just help out with the process. A lot of really interesting comments, a lot of people writing their own thoughts.
One of them hit me, I guess resonated with me, a little bit. I'll quote them: the cultural backlash against attention extraction is coming. Technologies that respect and protect human attention will, in time, win in the marketplace. To this idea of the pro-social: I think it's pretty obvious that these tools are having anti-social effects, right? Not always, not in every context, but there are ways they're trapping us, keeping us from living the lives we want to live in some contexts, making us feel just bad, or adding to problems with mental health that people may be having. I'm curious about this idea of the cultural backlash, though. Zoe, I'd love for you to begin on this one. Do you feel like this is happening? To me, it feels very much like people are waking up to the idea that this stuff makes me feel bad, and I don't know how much longer I really want to feel bad in this context. You know, it's funny, I exist in this world of tech and startups and VC where everybody is really excited about AI and thinks it's really positive. But if you take even half a step outside that bubble, I think it is very clear, at least to me, that the AI backlash is coming, or it is already at our doorstep, or it's already here, and that there is a lot of hate and vitriol. And I get it, because I think, Charlie, you nailed it fundamentally. What people are reacting to is that AI in many ways has been profoundly anti-social, in the ways that social media itself was bad, but it's almost gotten worse. I'll give you an example: we used to worry about people falling down these disinformation rabbit holes because they're in these echo chambers on social media. Now you can fall down a disinformation echo-chamber rabbit hole alone, with a chatbot. It's an echo chamber of one. It made it real simple. That's the way that I think about it.
And that's even more anti-social than the previous version, which was itself very problematic and very, very harmful. And so I think that's part of what people are reacting against. And look, I live in New York City. There was a subway campaign for a product called Friend.com that elicited a ton of backlash from the city. I've been observing things like that, and a few other instances along the way, that have definitely convinced me that for most people, whether they've used AI tools or not, whether they feel like AI is coming for their job or not, there's just a sort of instinct of, no, I don't want this in my life. Especially as an extension of the tech of the last decade. It's like: this industry, the one that gave us this crap and this hypercentralization, the people who make these bombastic statements with no nuance and just don't really seem to grapple with the amount of power and responsibility that they have. That's not the place that you want AI to be. I also think, by the way, there's a difference with AI tools. AI should not be your friend. If you think the AI is your friend, you are on the wrong track. AI should be a tool. It should be an extension of your agency. The fact that the first manifestation of large language models in a product happens to be a chatbot that pretends to be a human is like the aliens in Contact who present themselves as her father, so she can make sense of it. It's a perfect analogy. I think we're going to look back on it in five years and think of chatbots as an embarrassing party trick, and be like, oh, that was the wrong manifestation of large language models. Large language models should be an inherently tool-like thing, where we don't get confused about whether this is your friend, and you don't get caught up in delusions of grandeur or anything.
Well, I think, too, with the backlash, you mentioned this idea that these companies are going to build the next generation of it. But I think it's bigger than AI. You see this a lot; I think this is the third time I've said this on this podcast now, but you can feel with younger generations that they understand very acutely how they are being manipulated, right? They were born into this ecosystem that a lot of people have had to take time to learn and understand. And though they are a part of it in a big way, they also really don't suffer fools in that sense. It's like, I just don't necessarily want that; it's making me feel bad. It does feel like, when I want to get hopeful about this stuff, I talk myself into this idea that we are on the cusp of a little bit of a change. I've experienced, in the last year, more phone-free spaces in general, right? This idea of: this thing is not helping me in contexts outside of where I want to use it as a tool, so I'm going to put it away right now, or I need someone to create a permission structure for me to put it away. Going to the sauna, I've heard, is a big thing, because you can't have phones in them, so it's a social space to be in person. I predict that in the next year we're going to start to see people creating human-only spaces and saying, okay, just so you know, this gathering, whether it's online or in person, is a human-only space. No wearables, don't bring your AI assistant or your copilot. Yeah, I'll go on record. That's a prediction for 2026. I was going to say, I think one of the interesting things is that society adapts to these things.
And there is this belief that once we start spiraling down, we continue to go down. But people, and society as a whole, start to figure this stuff out, and it may take a while, and there may be a lot of damage done in the interim. But there are no kids today going on Facebook, right? They picked other places. Over time, as new generations come in, they look at the old stuff and they see the problems with it, because the problems are all much more obvious, and then they look for some other space. In the social world, that had been, like, TikTok, for example, which has its own problems. But there's going to be another generation, and there'll be another generation of AI tools, and there'll be another generation of social as well. And if we're in a position where we're creating spaces that are welcoming and human, people will move to them eventually, as they realize how problematic the other ones are. And a lot of the response that I've heard to the manifesto as it came out was just this exhale. Like, yes, I've been thinking that we need this vision, and I've been thinking about it; I didn't realize other people were thinking it. And I think that's part of society moving forward with these things and thinking through: what is next? What do I want? If I'm going to make a jump to new tools and new systems, I want to be a little more deliberate about it. And if the people building them are also more deliberate about it, maybe we can actually have a next generation of tools that meet these principles we're talking about. To that end, there are some interesting critiques here that are made, I think, in good faith.
One of them I wanted to just highlight and get your reaction to. Somebody on Bluesky said, quote: like other cyber-libertarian frameworks, they stop short of the root cause, which is politics. Liberation depends on shifting political power, because power determines which values take hold. That's obviously true. Other criticisms that I have seen, which seem to be part of the cynicism of living in 2025 or 2026 as people listen to this, run along the lines of: yeah, that sounds great in theory, but you butt up against the politics of it all, the capitalism of it all, the scale of it all. All of those things are very real, right? By the time people hear this, Warner Brothers may have been bought by, like, 450 companies. We don't know. But all of them portend some kind of strange dystopian consolidation. In general, how are you guys thinking about that? This is a guiding statement to some degree; it's not meant to solve every problem that exists. But how are you thinking about it coming up against the politics of it all? One point, by the way: I published another essay about optimization and how modern society just kind of optimizes everything. It's true in the technology industry, but it's also true in business and in politics too. I think the defining characteristic of modern society is that we forgot that optimization actually does come with a cost. It's just an indirect, harder-to-see cost. And I think that is true across many different dimensions; it's part of what everyone is feeling in this moment. I would also point out that we are part of the industry, and we are also realists. We understand the incentive structures, the things that get us stuck in these kinds of behaviors. A couple of things.
One, to a point that we made earlier in the conversation, some of this is totally structural: it's the person at the top making decisions that optimize for Wall Street or something. Other parts of it are just emergent. They're local product managers on a given team making a decision, like, okay, we know that number is supposed to go up, and not thinking about what the downside of the number going up is. And actually, if they thought about it in terms of resonance, they'd make a better product. It would actually create more value for the shareholders too. It doesn't have to be in tension. So, little things: if everybody can ask, hey, is this resonant? Just having people have that terminology and ask that question. If lots of different people were asking that throughout the industry, that could have an impact. And second, myself and a number of others who worked on this manifesto are working on things that are structural changes to the kinds of distribution structures and power structures that create technology. I'm working on an alternate security model that's open and decentralized and allows getting rid of some of these silos that lead to aggregation, while still being fully aligned with people's private interests. So we are not just saying, oh, what if everyone just said, hey, let's be nice today, though even that could actually be somewhat effective. We are realists about the emergent factors that cause some of these things, and we're working to modify or tweak or do what we can to help the right kinds of things emerge. There are many reasons to be cynical right now. I completely understand where all of that is coming from. And some of the job that we're hoping to do, or at least that I'm hoping to do, I shouldn't speak for anyone else on this, is the more that we can paint this picture and show people.
And yes, maybe some of us are a few steps into the future on this stuff. If we can start to bring that back and begin to show people that there are real things behind this, we can all start to make decisions in this direction. And hopefully we can start to thaw out some of that cynicism and show that there's something real here. Each one of those steps is important; we're not going to flip the entire structure of the world right now, but we can take these little steps and really make a difference over time. What I would add is that there's already been a lot of ink spilled on the diagnosis. I think capitalism is part of it. I think our political system is part of it. I think optimization culture is part of it. It's a confluence of different factors. But part of what we were trying to do, at least in this piece, is move beyond just the diagnosis of the problem and try to craft a positive vision for where we should go. And absolutely, a totally valid critique might be that you need to spend more time unpacking some of those underlying drivers, and we are all, I think, very aware of the ways in which those shape the current reality. So I want to land this plane. People are going to be listening to this at the beginning of the year, and I think this is a hopeful vision of a future, or at least, while you're constructing these things, it plants the seed of a hopeful vision in your brain. What is giving you all hope about what's coming next this year in this space? You've gone through this; you are clearly hopeful people to have put this together in some sense, no matter how beaten down and cynical anyone who exists online is these days. But what is keeping you guys going forward on this resonant vision?
I think what gives me hope is that I am seeing this whole new generation of founders and technologists, many of whom are contemporaries of mine, who grew up under big tech and are questioning all of the assumptions that underlie the way that we built things, and who are trying to think about building things in new ways. And I think they very much subscribe to the types of values and vision that we lay out in the manifesto. So that's what gives me hope. I feel like the tide is really turning, and the fact that there's been a ton of interest and momentum in the manifesto itself suggests to me that there's a critical mass here who feels this way. That's kind of all you need to nudge it in the right direction, I think. Yeah, I was going to say, I guess I'm the old man of the crew, in that I've been alive slightly longer than the others, and I remember the early days when people were thrilled with new technology and it was exciting, before it all seemed to turn. And to me there is this element of going back to that. There are mistakes that were made, but being able to go back to that time while recognizing the mistakes, and doing a better job this time, I think is actually really important. Some of the criticism I've seen, because I talked about this concept of going back, was like: no, it was always terrible. And it's like, no, I lived that time. I remember when using new technology and the internet was enjoyable and exciting. And we can bring that back. There's nothing that says we have to keep the awful parts of the internet working the way that they currently work, really against our own interests. So I'm very optimistic. When you put these things out in the world and people are gravitating to them, that's the first step towards pretty massive change over time.
I think, for me, people have felt so cynical, like they can't do anything, like maybe they're the only one who wants to push back against some of these optimization pressures. And seeing the response that people have had to this has been really inspiring to me, because to some degree I was thinking that no one was going to care, or that people would think it was dumb. Instead they're like: yeah, how can I participate? Oh my gosh, wow, okay, I'm into it too. So it feels very encouraging to me to see people feel that agency and want to change things in this way. And again, I work with a bunch of folks who are at the cutting edge of using large language models in interesting ways to create, well, infinite bits of situated software, you know, personalized software. And it's exciting what you can do with some of these things. And again, I think chatbots, if you're looking at chatbots, this is going to be like social media but worse, just kind of the same old story. My hope is that we will be beyond that relatively soon as people start waking up to all the other things that you can do, things that are now possible and democratized and available to just about anyone, and that empower them. It's really cool. I'm extremely excited about what we as a society are going to do with some of these technologies. So. All right. With that, let's go forth into 2026 and make it suck less than it did before. I appreciate everyone's time. Zoe, Alex, Mike, thank you for coming on Galaxy Brain and offering an unusual dose of positivity and hope. Excellent. Well, thanks for having us. Thank you again to Zoe Weinberg, Mike Masnick, and Alex Komoroske.
I wanted to have this conversation because, back in November, at a panel discussion that I participated in in Bozeman, Montana, we had this long conversation about the generative-AI moment, and so much of it was focused on the economic issues, the fears of artificial general intelligence, the ways in which this is all being abused. And, as it tends to with consequential new technologies, the conversation got very negative and very reactive, dwelling on all the scary externalities of a new technology. At the very end of the conversation, one of the panelists, Sarah Myers West, who does a lot of work in AI policy, ended with something that was, to borrow the term, very resonant to me. She said that she was really tired of talking about all the bad stuff, all the stuff that AI shouldn't be, the future being brought to the world that we need to fear, and that she wanted to think about ways to put forward a positive vision, right? To stop being on the defensive all the time and to think about what is the future you want to build. If this technology is here, if it's not going away, how do we harness it to do something that will be productive and helpful to human flourishing? And that just stuck with me, especially as someone who's always focused on these negatives. And so, a couple days later, when I saw this manifesto, I thought to myself: some of this stuff is probably idealistic. Some of this stuff is going to be really hard to enact, from a political standpoint, from a fundraising standpoint. It's going to be a challenge; it's always a challenge to build something that resists scale in general. And that doesn't mean that we shouldn't try, right?
We shouldn't be so rational about all of this that we talk ourselves out of building something that matters, that helps, that actually aligns with the goals of being a good human living a good life. And so I found the conversation, more than anything, to be motivating: something that, as we continue to do episodes here, as I continue to do my reporting, as you all continue to live your lives out there among this technology, pushes us to think about what it is we want, what it is we should be building, and to come up with positive visions of how this stuff should work instead of constantly just defending against it. So I hope this conversation gave you some of those ideas, some of those tools. It certainly did for me, and it's something we're going to continue to explore throughout the year. So thank you once again. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday, and you can subscribe to The Atlantic's YouTube channel, or go on Apple or Spotify or wherever you get your podcasts, and please leave a five-star review if you would. And just remember, if you also enjoyed this, you can support this work and the work of all of my colleagues at The Atlantic by subscribing to the publication at theatlantic.com/listener. That's theatlantic.com/listener. Thank you so much for listening, and I'll see you on the internet. This episode of Galaxy Brain was produced by Nathaniel from and edited by Claudine Ebeid. It was engineered by Dave Grine. Our theme music is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic Audio, and Andrea Valdez is our managing editor.