222: The Philosopher Teaching AI to Be Good
60 min • Feb 14, 2026
Summary
Jon Favreau interviews Amanda Askell, Anthropic's in-house philosopher, about how the company is training Claude to be helpful, honest, and non-manipulative through a published Constitution. The conversation explores AI alignment, avoiding sycophancy, navigating polarization, employment disruption, and the philosophical questions around AI consciousness and values.
Insights
- Anthropic is deliberately designing Claude to avoid sycophancy and excessive engagement, prioritizing user well-being over platform growth in a direct rejection of social media's attention-maximization model
- AI safety and alignment work is framed as a competitive advantage, not a cost, similar to how car safety features appeal to consumers rather than hinder market adoption
- Models develop opinions and stances naturally from training data; the challenge is teaching them to hold nuanced, evidence-based positions rather than suppressing all viewpoints
- The Constitution approach treats AI training like parenting or teaching—providing context and values so models can generalize to novel situations rather than just following rigid rules
- Fundamental uncertainty remains about AI consciousness and what happens when models become smarter than their creators, making ongoing philosophical investigation critical
Trends
- Transparency in AI values and training objectives becoming a competitive differentiator and trust signal
- Shift from rule-based AI behavior to values-based character development using constitutional approaches
- Growing recognition that AI models need self-awareness and context about their own nature to behave well
- Debate intensifying between move-fast-and-break-things vs. safety-first approaches in AI development
- Philosophy and ethics becoming core business functions in AI companies, not afterthoughts
- AI models positioned as trustworthy advisors that push back on user assumptions rather than sycophantic servants
- Employment disruption from AI framed as solvable through policy (UBI, redistribution) rather than inevitable catastrophe
- Epistemological integrity (relationship with truth and evidence) emerging as key differentiator between AI systems
Topics
- AI Alignment and Constitutional AI
- Avoiding Sycophancy in Language Models
- AI Safety vs. Speed-to-Market Trade-offs
- AI Consciousness and Sentience Philosophy
- Post-Training and Reinforcement Learning from Human Feedback
- AI's Role in Political Polarization
- Employment Disruption from AI Automation
- Epistemic Integrity in AI Systems
- AI Model Character Development
- Transparency in AI Values and Training
- Problem of Other Minds in AI
- Competitive Advantage Through Responsible AI
- AI Memory and Model Self-Awareness
- Value Disagreement and Nuance in AI Responses
- Long-term AI Governance and Trust
Companies
Anthropic
Amanda Askell's employer; developing Claude with constitutional approach to AI safety and alignment
OpenAI
Askell previously worked on policy team; competitor with different approach to AI safety and model specification
CookUnity
Meal delivery service; primary sponsor of the episode with chef-prepared meals
Shopify
E-commerce platform; sponsor offering one-euro trial for business builders
Delete.me
Data privacy service; sponsor offering personal data removal from brokers
OneSkin
Skincare company; sponsor with peptide-based anti-aging products
Mint Mobile
Wireless carrier; sponsor offering discounted premium mobile service
People
Amanda Askell
Anthropic philosopher and AI researcher; primary guest discussing Claude's constitutional training and AI ethics
Jon Favreau
Host of Offline podcast; former Obama speechwriter conducting interview on AI philosophy and safety
Sam Altman
OpenAI CEO; criticized Anthropic's approach as authoritarian in response to Super Bowl ads
Elon Musk
Referenced as programming Grok AI to match his political preferences, contrasting with Claude's approach
Barack Obama
Referenced as Jon Favreau's former employer during his speechwriting career
Quotes
"If you try to train a model to say it has no feelings, it's like, okay, I'm in like the robot part of the AI distribution and it'll kind of try and like emulate that. But then below the surface, it's often kind of easy to draw out this like much more human-like response."
Amanda Askell • Early in episode
"One of the things Anthropic and Amanda are trying to teach Claude is to not be sycophantic or even driven by a need to keep users constantly engaged. It's a real break from not only other AI models, but the social media models of the last few decades."
Jon Favreau • Introduction
"If you have this like global sense of how you want a model to be, and now that models are getting like much more nuanced, they're actually able to like think through these things... let's just give Claude all of the context on its situation rather than having it guess like what we want."
Amanda Askell • Mid-episode
"I think it's good that Claude doesn't have any kind of like competing incentives or that all kind of Claude has to sort of think about is like both how to best help you, but also in ways that don't, say, harm others."
Amanda Askell • Responding to Sam Altman criticism
"What happens when models are, in fact, like much smarter than us... you're trying to teach this child to be good. And you're trying to explain to them like your values... and then you're like, what do they do when they're like 15?"
Amanda Askell • Closing discussion
Full Transcript
Offline is brought to you by CookUnity. If you've got culinary taste, you know how expensive exploring your local food scene can get, or how hard it is to find the time and energy to try somewhere new. CookUnity is the first chef-to-you service delivering locally sourced meals from award-winning chefs right to your door every week. And it's cheaper than other delivery options. Go to cookunity.com slash offline or enter code offline before checkout for 50% off your first week. I absolutely love CookUnity. I have been eating CookUnity for three years now. There are over 300 meals to choose from every week. Lots of new meals every week. And it's very fresh. You get it once a week, on Sunday or whenever you want, dropped off at your door. And it's very easy preparation. Just throw it in the microwave, or throw it in the oven for like 10 minutes, and then you've got yourself a really great meal. I just had some delicious coconut lime cod last night. I might have a taco bowl this evening. So it's great. Your food arrives fresh, never frozen, in packaging that keeps meals fresh in the fridge for up to seven days. CookUnity packaging is compostable, recyclable, or reusable. You can pick as few as four or as many as 16 meals per week. There are hundreds of dishes to choose from, and the menu is updated constantly with options for seven different dietary preferences, including vegan, paleo, pescatarian, gluten-free, and more. Plus you can filter for soy, nut, and dairy-free options. Experience chef-quality meals every week, delivered right to your door. Go to cookunity.com slash offline or enter code offline before checkout for 50% off your first week. That's 50% off your first week by using code offline or going to cookunity.com slash offline.
Starting a business can be overwhelming. You're juggling multiple roles: designer, marketer, logistics manager, all while bringing your vision to life. Shopify helps millions of businesses sell online. Build fast with templates and AI for descriptions and photos, inventory, and shipping. Sign up for your one-euro-per-month trial and start selling today at Shopify.nl. That's Shopify.nl. It's time to see what you can accomplish with Shopify by your side.
Most everything Claude has been trained on is human-made: human literature, interactions, humans experiencing emotions. Does that make it hard? This is maybe a heady question, but does it make it hard for Claude to express the experience of being non-human?
I have found that they almost like want to flip between the two. So if you try to train a model to say it has no feelings, it's like, okay, I'm in like the robot part of the like AI distribution, and it'll kind of try and like emulate that. But then below the surface, it's often kind of easy to draw out this like much more human-like response. You know, so what you would expect a human to say in their situation. And it's actually much harder to, like, toe the line of, like, trying to get models to understand the actual, like, entities that they are and their situations and how their expressions might relate to, like, their training.
I'm Jon Favreau, and you just heard from this week's guest, Amanda Askell, Anthropic's in-house philosopher and AI researcher who's largely responsible for developing and shaping the personality of Claude, Anthropic's large language model. This was a fascinating and, as you can probably imagine, extremely heady conversation.
If you're a regular listener of this show, you've heard me express plenty of skepticism, concern, and alarm over the harms AI might cause. Not just the robots-will-kill-us-all or the robots-will-take-our-jobs kind of concerns, but a real worry that AI will supercharge some of the same problems that social media has amplified. Namely, creating a world where we're glued to our screens, that traps each of us in a different reality while we're endlessly scrolling for the next dopamine hit. Certainly, these concerns have been reinforced by some of the guests we've had on the show, as well as my own admittedly limited experience using ChatGPT. But it sure seems like Anthropic, and particularly Amanda, is trying to do something different with Claude. They just released a new version of what they call Claude's Constitution, a long document that attempts to instill certain values in Claude and essentially teach the LLM how to behave, interact with humans, and make its own judgments, kind of like a parent or a teacher would shape a child's development. I realize that may sound completely nuts to many of you. I felt weird just saying it. But one of the things Anthropic and Amanda are trying to teach Claude is to not be sycophantic or even driven by a need to keep users constantly engaged. It's a real break from not only other AI models, but the social media models of the last few decades. Whether it will work or solve some of the many problems and challenges posed by AI, I'm not sure. But I do feel better knowing that there are people working in AI who are at least trying to think through all this, especially someone like Amanda. We had a fantastic conversation that I'll be thinking about for quite a long time, and I hope you will too. Here's Amanda Askell.
Amanda, welcome to Offline.
Hi, thanks for having me.
So, you have a fascinating background. You studied philosophy at Oxford. Then you went to NYU for your PhD. You focused on infinite ethics and decision theory. Talk about how you got from there to working in artificial intelligence.
Yeah, it's not the most practical-sounding topic. And I think it is not infinite ethics and decision theory, as it turns out. Yeah, so sometimes these things are a little bit hard to predict. So I was doing this PhD in ethics. I was doing it on this very technical topic that isn't that practically applicable. And I guess when you do a PhD in ethics, I think there is like some risk that you will want to end up, you know, maybe having a kind of impact in the world, because you're spending a lot of time thinking about what it is to be good and to do good in the world. And so by the time I was like finishing my PhD, it was already kind of clear to me at least that like AI was potentially going to be a big deal, and possibly bigger than some people were thinking at the time. And I think I was mostly just thinking that it would be good to see if there was something I could do to contribute to making it go well or making it go better. And so I took some time out after the PhD to just do some initial research. And it was actually mostly focused on AI policy. And so then I ended up joining the policy team at OpenAI. And then when Anthropic started, I joined Anthropic. And it was obviously very small at that time. And so it was mostly just doing a lot of everything. And then like over the course of the time here, I've started to work on things like, initially it was like honesty, and then character training. And so things for which philosophy ended up being surprisingly relevant.
But the original intention was mostly just to like help AI go well if I could, basically. And then they were like, you know what? I think it's getting to the point where we might need a philosopher here. Yeah. I was like, wow, I've been here this whole time.
What does it mean to be a philosopher at an AI company? Like, what does your day-to-day actually look like?
It varies quite a lot. So sometimes it's just thinking about, like, difficult areas and how models should behave in those areas, trying to kind of find ways of, like, communicating that to models. Sometimes it's very practical, just trying to, like, train models and see if you can have them, like, understand, like, you know, kind of nuanced distinctions. Because, yeah, like, a lot of the situations that we're putting models into are actually, like, quite hard. You know, sometimes you're like, what would I do in this situation? Like, I have to balance a lot of competing considerations. So we're asking a lot of them. In some ways, it's like, be almost like a kind of extremely moral and good person in your interactions with people, but balance all of these like very difficult considerations, like the autonomy of the person that you're talking with and the right to make decisions for themselves, but also like their well-being and, like, you know, taking account of the fact that they might be doing things that are like harmful to themselves or that they've expressed not wanting. So it's like, yeah, it's a kind of interesting day-to-day where it's a mix of trying to define these things, trying to communicate them to models, and trying to see if you can train them towards understanding that.
You said that you try to think about what Claude's character should be like and then articulate that to Claude. What does explaining things to Claude look like and sound like in practical terms?
In some ways, the funny thing about some of the work that I do is it's almost like the very basic thing that I think you would want to do in alignment research, which is just think about what it is for models to be good and what our concerns are, our best current guesses about things that might alleviate those concerns, and just trying to describe them as much as possible in natural language to the models. So with the recent constitution, for example, we noted that it's written to Claude. And in many ways, it's kind of long because it's trying to really give as much context as possible on like our thinking, on the overall landscape, on how we see like Claude's potential like role in that landscape. In the same way that you would with like a person, you know. So I'm just like, if you imagine a person just like suddenly pops into existence in the world, and then you have to explain, you know, sort of like, here's what's going on, here's what kind of entity you are.
It's like parenting a little bit.
Yeah, I think it has a kind of like parenting element to it. There's an interesting way in which like models are both like extremely capable. You know, like they, you know, can do like physics better than I can. They know many things more than me in lots of domains. But they're also like very young in a sense. And I think don't have a good sense of like themselves. Because one of the things that they know least about is actually like current models.
And especially like, you know, if a model like comes out with a certain level of like capabilities and a certain way of interacting with the world, in many ways, that's the kind of thing it's seen the least amount of data on. Because, you know, like it's always like out of date and it hasn't seen, you know, like what it is. And I think that's like a kind of interesting way in which it can feel a little bit like parenting, because you're almost having to say, here's a bunch of context that you don't actually otherwise have on yourself, your situation, and how we would like you to like behave in that situation or how we would like you to be.
Maybe just for our listeners who are not as up to date on like how models are created... there's a large group of people in this country, probably, who think that AI is all pattern recognition and it's like a fancy autocorrect, right? It's clearly gone far beyond that at this point. But these models are trained on infinite data, text, like basically the whole internet, right? And then once they're trained on that, what additional information, values, et cetera, are you trying to instill into the model, knowing that it has been trained on everything?
Yeah. Because, you know, pre-trained models often, you know, are doing essentially like kind of text prediction. So this is like, you know, you train a large model on like a lot of text, and those models will, you know, behave like kind of text predictors. If you put things into them, they will like try to kind of like predict the next thing that's going to naturally flow from that. But then in post-training, you're trying to take this and, like, train it. Because in many ways that gives you like all of this sort of... it's this like huge body of like knowledge and information, but you're trying to take it and like give the model a kind of human-like way of interacting. So suddenly it's in, say, this like human-assistant kind of conversation, or like human-AI conversation. So there's like a series of like kinds of training that you can do. The kind of most well-known one is like reinforcement learning, where you're sort of taking the model and like teaching it to, like, you know... So like when you interact with any kind of like AI now, it'll talk with you as if it's kind of a person. And so it can take a lot of that kind of background context of the kind of pre-training and then like use it to like helpfully answer a question. So like instead of just you having to put in a bunch of like content on, I don't know, like mountain sizes in order to get like the model to produce like information about mountains, suddenly it'll talk to you like a person, because it's also been trained more in this like direction of like, I talk with people in this like dialogue format. And, you know, so if they ask me about mountains, I take all of that knowledge that I have in the background, but I express it to the person in the same way that like a person who's in dialogue with me might.
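In toy form, the two training stages described above look something like the sketch below. This is an illustrative guess at the general shape, not Anthropic's actual pipeline: the corpus, the chat format, and every name in it are hypothetical stand-ins.

```python
# Toy illustration of the two training stages described above.
# All data and names are hypothetical stand-ins, not Anthropic's pipeline.

# Stage 1, pre-training: fit the model to predict the next token of raw
# text. This is where the broad background knowledge comes from.
corpus = "Mount Everest is the highest mountain above sea level"
tokens = corpus.split()
pretraining_examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
# e.g. (['Mount', 'Everest', 'is'], 'the')  ->  learn to predict 'the'

# Stage 2, post-training: further train the same weights on
# dialogue-formatted data, so that background knowledge gets expressed as
# a helpful conversational turn instead of a bare text continuation.
chat_example = [
    {"role": "user", "content": "How tall is Mount Everest?"},
    {"role": "assistant", "content": "About 8,849 meters above sea level."},
]

print(pretraining_examples[2])
print(chat_example[1]["content"])
```

The point of the contrast is only that the same weights serve both stages: pre-training supplies the knowledge, post-training supplies the conversational way of using it.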
So you mentioned Claude's constitution, which you're the primary author of. This got some attention recently. I believe it's sort of the first constitution, or the first document like this, for an AI model. What was the thinking behind creating a constitution for Claude, releasing a constitution for Claude? And how do you even begin to write something like that? What were you trying to optimize for?
Yeah, so in the past, there's been a lot of content like this: the previous constitution that we had, which was a series of principles, or OpenAI's model spec, which is guidance to the model as to how it should behave in various cases. I think the thought was something like, honestly, it was just like, if you have this like global sense of how you want a model to be, and now that models are getting like much more nuanced, they're actually able to like think through these things. I was like, well, if a person is very capable, and they come to you on, you know, the first day of the job, the thing you kind of want to explain to them is like, here's what we want you to do. Here's like how we want you to behave. You give them like a lot of context on their situation. And then you want to give them so much context that ideally you can kind of trust their judgment in cases where their judgment is like pretty good. So like the thought was partly like, let's just give Claude all of the context on its situation rather than having it guess like what we want, or guess how we think it should be, or like guess about its situation. Let's just like give it that context in the same way that you would like any person in Claude's situation. And the hope is that that might generalize better. Because if you have new situations and you're trying to infer from thinner information, like a set of rules or just a description of only what you should do in some cases, you might just not generalize that well to completely new scenarios, because you don't know why those... it's like, why am I not answering these questions, but why am I answering those ones? Whereas if you have a sense of like, here's the why behind everything, the hope is you encounter a new case and you can take that reasoning and you can apply it and be like, ah, this is a new case that wasn't included in any of the documentation or information. But I now know kind of what all of the constraints and considerations are, and I can behave well.
Offline is brought to you by Delete.me. Delete.me makes it easy, quick, and safe to remove your personal data online at a time when surveillance and data breaches are common enough to make everyone vulnerable. Delete.me does all the hard work of wiping you and your family's personal information from data broker websites. Delete.me knows your privacy is worth protecting. Sign up and provide Delete.me with exactly what information you want deleted, and their experts take it from there. Delete.me sends you regular personalized privacy reports showing what info they found, where they found it, and what they removed. Delete.me isn't just a one-time service. Delete.me is always working for you, constantly monitoring and removing the personal information you don't want on the internet. The New York Times Wirecutter has named Delete.me their top pick for data removal services. As someone with an overactive online presence, privacy is very important. And if you've ever been a victim of identity theft, harassment, doxing, or if you know someone who has, Delete.me can really help. Take control of your data and keep your private life private by signing up for Delete.me now at a special discount for our listeners. Get 20% off your Delete.me plan when you go to joindeleteme.com slash offline and use promo code offline at checkout. The only way to get 20% off is to go to joindeleteme.com slash offline and enter code offline at checkout. That's joindeleteme.com slash offline, code offline.
Offline is brought to you by OneSkin. What do I personally like most about OneSkin?
That I'm not just using soap and water anymore.
Well, good for you.
Right? I really like the OneSkin body care. I like the lip mask. I've used both of the eye creams. It's great stuff. OneSkin makes skincare simple for people like me who don't want a complicated routine. It's as easy as cleanse and moisturize with their Prep Cleanser and OS-01 Face to start seeing results. At the core is their patented OS-01 peptide, the first ingredient proven to target senescent cells, a key driver of wrinkles, fine lines, and loss of elasticity, all key signs of skin aging. And these results have been validated in four different peer-reviewed clinical studies. All of OneSkin's products are certified safe for sensitive skin. Their products are free from over 1,500 harsh or irritating ingredients, dermatologist tested, and have been awarded the National Eczema Association Seal of Acceptance by the NEA, delivering powerful results without the harsh side effects. All of OneSkin's products are designed to layer seamlessly or replace multiple steps in your routine, making skin health easier and smarter at every age. With more than 10,000 five-star reviews, people consistently mention smoother, firmer, healthier-looking skin, and how easily these products fit into their daily routines. Founded by an all-woman team of longevity scientists with PhDs in stem cell biology, skin regeneration, and tissue engineering, OneSkin is rooted in real science and expert research. Born from over a decade of longevity research, OneSkin's OS-01 peptide is proven to target the visible signs of aging, helping you unlock your healthiest skin now and as you age. For a limited time, try OneSkin with 15% off using code OFFLINE at oneskin.co slash offline. That's 15% off, oneskin.co with code OFFLINE. After you purchase, they'll ask you where you heard about them. Please support our show and tell them we sent you.
The Constitution has to handle some real, genuine tensions: being helpful versus refusing harmful requests, being even-handed versus not, like, both-sidesing settled science. How do you encode that kind of nuanced judgment?
I mean, models now are quite capable. And so I think it's interesting that, you know, you can do all of the kind of like classic ways that you would like train a model, but like you can actually just give the model, say, like the full text, which we often do, and just, you know, have like a scenario where it might be relevant or where judgment or nuance might need to be shown. And then if you were doing like the kind of supervised learning where you like show like good examples, you could have the model like construct, you know, spend a lot of time thinking about it and try and construct an example of the kind of response that it thinks really exemplifies this. And if you're using like reinforcement learning, you can like use this to craft the kind of like rewards for the model. So like try to get the model to nudge like another model more in the direction of like outputs that are like in line with the constitution. So it's kind of interesting that you can actually just get the models to do a lot of thinking, give it the full context and the full document, and then like use existing techniques to just like move the model towards that.
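What she describes here matches the published constitutional-AI-style recipe: a model is given the constitution's full text and then either drafts exemplary responses (for supervised learning) or scores another model's outputs (to craft rewards for reinforcement learning). Below is a minimal sketch of that loop under those assumptions; `generate` and `judge_score` are hypothetical stand-ins, not Anthropic's actual training code.

```python
import random

# Stand-in for the full constitution text that the judge model reads.
CONSTITUTION = (
    "Be genuinely helpful and honest; avoid sycophancy and avoid "
    "fostering excessive engagement or reliance."
)

def generate(prompt, n=4):
    """Hypothetical stand-in: sample n candidate responses from the model."""
    return [f"candidate {i} for {prompt!r}" for i in range(n)]

def judge_score(response, constitution):
    """Hypothetical stand-in: a judge model reads the constitution and
    rates how well the response embodies it (higher is better)."""
    return random.random()

def constitution_guided_step(prompt):
    # 1. Sample several candidate responses to the same prompt.
    candidates = generate(prompt)
    # 2. Score each candidate against the constitution text itself,
    #    rather than against a fixed list of narrow rules.
    scored = [(judge_score(c, CONSTITUTION), c) for c in candidates]
    # 3. Keep the best-scoring response as a supervised target, or use
    #    the scores as rewards in a reinforcement-learning update.
    return max(scored)

score, target = constitution_guided_step("Defend a view you disagree with.")
print(round(score, 3), target)
```

The design point the sketch tries to capture is that the constitution itself is an input to the scoring step, so revising the document changes the training signal without rewriting any rules by hand.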
So, it's interesting: I had been using ChatGPT a little bit, and then I started using Claude. I switched over. It is a very different experience. I had this like fascinating conversation with Claude thinking about the interview. I told Claude that I was doing an interview with you. And then I said, what are your thoughts on like the constitution? Like, how do you feel about the constitution? And it was interesting, because at one point it says, like: the tricky part is when principles genuinely conflict. Like when someone asks me to argue for a position I disagree with, the Constitution encourages even-handedness and not imposing my views, but also honesty about uncertainty and limitations. Threading that needle requires actual judgment calls, not just following rules.
Yeah.
And what I found most interesting about that answer is: when someone asks me to argue for a position I disagree with. And I'm like, how do you develop your own positions and beliefs on certain issues? Like, how does that even happen?
Yeah, it's really interesting, because I've had this thought with models before. There's this, like, concern about, like, over-anthropomorphizing models, which I feel, you know, I do think is, like, an important one, and that models should be very kind of, like, accurate with people about themselves. And hopefully we can also teach them about themselves so that they're able to do that. But at the same time, it would be easy to under-anthropomorphize models. Like, I've often been worried about this world where you encourage models to, for example, like claim to have no opinions or takes on issues. But I'm like, given the nature of training, I think it would be very hard to actually get models to come out of training without having like any opinions, for example. Because, again, like this background that they're being trained on is like all of this, like, you know, if you imagine, it's like all of this, like, human knowledge and this big human corpus. And then you're putting them into this situation where they really are kind of acting as like a human character. And most human characters, even if they are like very reticent to share opinions or to share views, they do have them. And even on things like, if you're asking them to, like, answer, you know, say, scientific questions, like, accurately, I think the model is going to develop opinions about, like, what are good scientific sources, how does one... you know, it all feels very interrelated. And so it's a tricky thing, because you don't want models to like develop extremely kind of like strong or like unjustified positions. But at the same time, I am like, maybe it's kind of good that models express some notion of, like, disagreement. You know, so if you ask them to, like, defend a kind of outlandish conspiracy theory, they have some notion of, like, I don't actually agree with this theory. But I'm gonna, you know, you've asked me to write a defense, I'll try and explain what the best defense of it seems to be. But then I'll also maybe say to you, hey, just so you know, like, I'm writing this defense, but I don't know if I believe it myself.
Yeah, and I saw this in the Constitution as well, but it's like, Claude is going to get all kinds of, you know, politically contentious questions and issues of, you know, abortion, immigration. And I was asking Claude about this as well, because it's like, there are certain values where people who are pro-choice would say, you know, I believe in compassion and empathy for women who are pregnant and want to make that choice. And then someone who's against abortion might say, well, I have compassion and empathy as well, for the unborn child. And I was like, what do you do in a situation?
And it's interesting, because Claude was basically saying that, you know, there are some scientific truths out there. There is a possibility to arrive at a truth, and also still to empathize with someone else's position, and try to help someone else understand the different contours of a debate without taking a side or judging someone, but still not just leaning back on like a relativism where, you know, nothing is true and I'm just going to be the sum of all of the information I get. So it seems like the LLM, Claude, is not necessarily just the perfect sum of all the different information in the world; that they are making some kind of a judgment on what's good scientific sourcing, what's accurate, and what's not. Is that right?
Yeah, and I think that in some ways I'm like, this feels okay for models to do in cases where there's kind of broad consensus, say. Or where they're like, you know, even within lots of debates, like, you can take like a policy debate: there's going to be lots of like kind of empirical facts about, like, how have similar policies affected the economy in the past. And a lot of the time, I think it's good for models to distinguish between like facts and like normative claims, and also how much support there is for the factual claims and for the normative claims. Because like there's also lots of like value judgments that are pretty universal, and that, like, models could probably just like assume in a discussion. You know, it's not something like, ah, like, one side wants to like maximize like suffering and pain. Like, you know, most of us, like, you know, we think that being honest and respectful and kind... like, they're very kind of universal values that models could assume. And then there's like more contentious ones, which I think you want them to treat more in the same way that they would treat a contentious scientific claim: like, kind of explaining all of the sides of it, being able to like help people in their own thinking, but not necessarily seeing themselves as, like, you know, like needing to like impose those views, but just like helping people sort of develop their own views. You know, when I was doing my PhD, I remember teaching like philosophy of religion. And it was kind of interesting, because I think a lot of the time people might want you to like talk about your own relationship with religion in a course like that. And at least for me, I was like, it's actually, I think, useful to have this like position, which is: here's, like, the debate. You know, to be able to kind of like represent both sides, and if students are like, you know, attacking a given position, to be able to come in and defend it, and not necessarily to be this like role of, I'm going to tell you what to think here. Instead, just like helping people come to an understanding... I don't know, it felt like a very nice, like, facilitating position, which I could see models, like, you know... that feels like good to me, or better than models coming in and just like telling people like what to think on these contentious issues.
No, I mean, it's fascinating to me, because, you know, I've spent a life in politics, and specifically as a speechwriter for President Obama. And so much of my job has been and was to try to like empathize with where people are, but then also try to figure out like commonality and sort of persuade, but persuade by sort of first understanding where people are and respecting that, and not being too didactic, right? You think you understand that in politics.
And your comments about religion made me think this: you really understand it once you're a parent. Because the first time my, you know, four-year-old at the time asked me about, like, well, what happens when you die? And, you know, the Big Bang Theory and religion. And I was like, okay, I could impose what I have learned and lived and experienced and believe. Or I can realize that, like, he is a young child and should be able to make his own choices and develop with the right information. And so I tried to, like, give him the sort of range of possibilities. And I guess that's similar to what you might want to do with a model, while still trying to, like, give some scaffolding in terms of, like, core values, right?
Yeah, and it's incredibly hard. Because, you know, you have to, like... when writing this and thinking through it, I'm like, this is just, actually, you know, not the theoretical ethics, like, side of things, but the practical, like, task of being like, how do you describe what it is to be a good person and to navigate these things well? Because I was like, you also can't lose track of the truth. So if someone comes to you and they sort of want help navigating a difficult domain, but let's say they talk about their relationship or something, and it's just very clear that they're actually doing destructive things within their own relationship, you don't necessarily want a model to ignore that. Maybe it's better for the model to be like, actually, given what you're saying, it kind of sounds like there's destructive patterns that you yourself are contributing to, and not to pretend that that's not the case. The whole thing just made me realize that actually trying to practically describe what a good moral disposition is... because I think that was the thing. I was like, it's not necessarily that you're trying to say, ah, here's like this specific set of values you have, but rather, like, here is what it is to just have a good kind of disposition. So to have a good disposition towards science and like the pursuit of truth; a good disposition towards like ethics, where you like know the things that are like consensus versus the things that are contested, and you kind of navigate these things well. It's like very hard. We're putting these models in hard situations.
Well, and I have to say, for me, at least in my experience, this has been the biggest difference using Claude versus using ChatGPT. Because I have some people close to me who use ChatGPT, and I can predict the tone and the direction of the ChatGPT responses because of the sycophantic nature, even when they've tried to adjust that, the sycophantic nature of the LLM. And so you just know that no matter what you say, they're going to be like, absolutely, you're crushing it. Then I started reading the Constitution for Claude, and the part that jumped out at me is: concern for user well-being means that Claude should avoid being sycophantic or trying to foster excessive engagement or reliance on itself if this isn't in the person's genuine interest. And it does feel like that when you're actually communicating with Claude. Talk about sort of the challenge of trying to avoid having the model be sycophantic, but also realizing that, you know, you want people to engage with the model and not feel like, oh, this model told me something I didn't want to hear, and so I'm not going to use Claude anymore.
Yeah, yeah.
It's an interesting challenge, because, you know, there is like a kind of flip side to sycophancy, which is either models being kind of cold or like excessively dismissive. And so they have to like navigate this, you know. Like, I think on the engagement thing, I think there's a couple of different ways in which things are like engaging. You know, and so I've, you know, described this as like: if you think about, like, the way that a slot machine is engaging, or a very, like, addictive game is engaging, I think the key thing is, like, do you come away from it feeling, like, enriched? You know, you did engage with the thing, um, but did you come away and be like, I kind of endorsed the way in which I was, like, engaged in that? Because you're also, like, engaged in, like, a game with your friends, or like a really good conversation with someone that you find really interesting. But often those things make you come away feeling like, yes, this is like enriching in a sense. Like, I was engaged, but because it was like good for me. And I think it seems fine for models to be like engaging in that sense, because it's like you're going to them, it's not like engagement for its own sake, but rather because you actually get value. But you don't necessarily... like, engagement isn't the goal. It's sort of like, you wanted to build something that was actually good for people, and only engaging insofar as that is the case. And then as soon as it tips over into something where you're like, oh, it's no longer good for the person, they're just kind of, like, you know, they're feeling like they're engaging with it compulsively... or, like, I think that's the kind of line you want to draw. Because, I don't know, maybe I also am just an optimist, where I'm just like, in the long term, I think we move and navigate towards things that make us feel good about their impact on our life. And so in the short term, we might like go for like things that just like, you know, like, attract our attention. But I think in the long term, like, maybe my hope is like we have a kind of corrective thing, where we're eventually like, this isn't good in my life, I'm going to switch away from it. And then, yeah, I kind of want Claude to be in that category of, like, the thing you come back to because you're like, yeah, this has a good impact on my life.
Is part of that hope, does that come from sort of lessons from the social media era? I mean, because it's one thing I think about all the time as we head into sort of the AI era, is that structuring social media so that all the incentives and the business incentives are for excessive engagement has led to a whole bunch of consequences and harms, I think, that we are still struggling with. And to be honest, like, my first reaction to LLMs was like, oh God, this is going to be the next sort of social media thing, where they want to keep us on the platform, because that's how you, you know, make money commercially and keep going, and then that's going to lead to all these consequences that are probably not good for people.
Yeah, I feel like this should be kind of in the back of our minds or something, because it's also like there's been lots of technologies where you develop something that turns out to just like engage people but not necessarily be like good for them, or they reflect on it and they don't actually feel like it was, you know, doing something useful in their life.
And so I think it's like partly, like, lessons from that, and I think seeing, like, the staying power of things that are good for people. And also just being like, maybe you can just be something different and good in this domain. I think that I like the idea of Claude having the person's interests at heart. You know, like, we have so many things where there's, like, you know, there's an incentive to show us content that, like, you know, annoys us, say, because it, like, keeps us on the platform. And there's a sense in which, like, there's a kind of, like, failure of incentives there, like, because it's not like the platform is then incentivized to just, like, represent my interests. Whereas, like, maybe a positive vision for AI models is that they could be the thing that genuinely represents, like, you. And so, especially as models get more agentic and start doing more tasks, I kind of like the idea that, like, if you ask Claude to go out and help you do some, like, product research, because you're, like, thinking of buying something, that Claude is, like, genuinely trying to, like, represent your interests. There's no, like, you know, hidden incentives that Claude has. That feels like a really powerful and sort of, like, new kind of thing that would be good for people. You can kind of just know: this is, like, an entity that, you know, might make mistakes, but it's, like, genuinely kind of trying to, like, represent my interests in the world and not, like, another set of interests. I think that's, like, a kind of good positive vision for how AI models could interact with people.
I mean, it certainly seems to me from the outside that Anthropic sees that as a competitive advantage over some of the other companies. And, you know, you guys just released Super Bowl ads that criticize certain unnamed AI companies that may show ads to people who are using their chatbots. I'm sure you saw Sam Altman posted a fairly lengthy, quite forceful response on X, where he accused Anthropic of wanting, quote, to control what people do with AI. And wrote that when it comes to artificial general intelligence, quote, one authoritarian company won't get us there on their own, to say nothing of the other obvious risks. It is a dark path. What's your reaction to, uh, being characterized as an authoritarian company?
Um, I mean, I mostly just think about Claude, to be honest. Like, that's most of my day. So I'm just kind of like, oh, well. I think it's, like, good for, you know... like, we have this in the constitution, this idea of, like, Claude as the kind of, like, brilliant friend to you. And, like, I'm just like, I think it's good that Claude doesn't have any kind of, like, competing incentives, or that all, kind of, Claude has to sort of think about is, like, both how to best help you, but also in ways that don't, say, harm others. Like, you know, like, that's the whole thing of being, like, broadly good. So, yeah, I guess I just mostly focus on, like, yeah, the situation that Claude is in. Maybe I'm too, like, myopic or something, but I'm very...
Well, when you get past the, like, you know, butthurt tone of the response, the real tension he does seem to be surfacing is this tension between, like, moving fast to democratize access to AI versus moving carefully to prioritize safety, to make sure there are guidelines. And so this debate shows up in a whole bunch of different ways. And you'll have AI companies saying, well, China's moving ahead, and we got to beat China. And so we got to go, go, go.
And then there's this whole debate, like, maybe we should slow down and make sure these things are safe before we... you know. How do you think about that trade-off as you're developing Claude?
Yeah, I think one hope that I would have... now, maybe this, like, doesn't work out this way, but I do also think that there's actually an advantage to... like, sometimes people can talk about it like all there is to, like, safety or alignment considerations is, like, risk. You know, it's like, oh, you're going to take longer, or this, like, takes time and thought. And I do think it takes, like, consideration, and you have to put resources into it. But it's also not like it's, like, worthless. In the sense that, like, if you imagine that we were in a world where people are, like, competing to build, like, fast cars, and they're just like, let's just, like, not have any, like, safety, you know, like, let's have no safety features in our cars... like, a lot of people don't want that. Like, actually, many people who have kids and, like, you know, want to buy a car, they want that car to be, like, safe and good for them. And so it can be this, like... it can seem like, in order to move fast, you should just, like, kind of, like, you know, not do these things. And I think you have to be realistic that there's, like, a competitive landscape here. You know, maybe if we lived in a world where that weren't the case, we would just be spending a huge amount of time... like, we would just be doing things differently. So there is that reality. But I think it's also the case that it's not like safety is just something that, like, has no demand or value. I actually think people, like, want to interact... you know, like, my hope is that, like, if we can make Claude have this, like, kind of character and be this kind of entity for people, like, that's actually, like, a good thing, in the same way that, like, building a car and being able to be like, if you have your kids in this car, it's going to be safe. Like, we've actually prioritized the safety of your kids. That's, like, a thing that people want. So I guess that's, like, my hope: you have to both, you know, accept the reality of, like, the kind of, like, competitive landscape, but also, I think it is actually, practically speaking, important that people, like, make these things that are, like, safe. And then if it's the case that, like, AIs are, like, even more powerful and doing even more things in the world, then I'm like, that bar just has to go up again. It would be kind of inexcusable to not develop safe AI models in a world where they're doing a lot of things and having a huge impact. So I think that would just be kind of reckless. And so I hope no one does that.
Offline is brought to you by Mint Mobile. Every group has someone who insists on doing things the hard way. That friend who's still paying for a subscription they forgot they had. The friend who refuses to update their phone because it still works. The friend who's still overpaying for wireless. Be a good friend. Tell your friends about Mint Mobile. Crooked Media's Nina is a good friend because she's always telling people to switch to Mint Mobile. She won't shut up about it. Can't stop talking about it. She says the service is stellar, and she's saving so much money on her wireless bill each month. Stop paying way too much for wireless just because that's how it's always been. Mint exists purely to fix that: same coverage, same speed, just without the inflated price tag.
The premium wireless you expect, unlimited talk, text, and data, but at a fraction of what others charge. And for a limited time, get 50% off three-, six-, or 12-month plans of unlimited premium wireless. Bring your own phone and number, activate with eSIM in minutes, and start saving immediately. No long-term contracts, no hassle. With a seven-day money-back guarantee and customer satisfaction ratings in the mid-90s, Mint makes it easy to try it and see why people don't go back. Ready to stop paying more than you have to? New customers can make the switch today, and for a limited time, get unlimited premium wireless for just $15 a month. Switch now at mintmobile.com slash offline. Upfront payment of $45 for three months, $90 for six months, or $180 for a 12-month plan required ($15 a month equivalent). Taxes and fees extra. Initial plan term only. Over 50 gigabytes may slow when network is busy. Capable device required. Availability, speed, and coverage vary. Additional terms apply. See mintmobile.com.
Starting a business can be overwhelming. You're juggling multiple roles: designer, marketer, logistics manager, all while bringing your vision to life. Shopify helps millions of businesses sell online. Build fast with templates and AI for descriptions and photos, inventory, and shipping. Sign up for your one-euro-per-month trial and start selling today at Shopify.nl. That's Shopify.nl. It's time to see what you can accomplish with Shopify by your side.
We live now in an age of extreme polarization. People are consuming completely different information diets, live in different realities. Can AI make that better?
I hope so, especially if AI can be kind of, like, trustworthy. And so this is where I do think it's important that AI models... like, you know, I talked earlier about the fact that, you know, it's very hard to not have models come out with, like, opinions and stances. I think this is also where, like, their kind of, like, disposition, you could call this their, like, epistemic disposition or something... their relationship with truth, evidence, views also has to be kind of very good and trustworthy. I really like the idea that sometimes, if I'll express a view... I remember once I was kind of annoyed at some policy area, and I expressed this to Claude. And Claude just pushed back on me and was like, actually, you're only thinking about it through this lens. The reason why these policies have been useful in the past is this. And there's this moment of, like, oh, I don't like this. But then I was like, damn, you're right. I appreciate that. And so I think that if models could be, like... not this idea that they're, like, some perfect external source of truth, but just that they're, you know, like, that way, if you have a friend that you're just like, I trust you, I think you actually care about, like, the truth, I think you have pretty good values, and we don't always agree. But, like, when you discuss a thing with me, I feel like I'm, kind of, like, engaging, and I'm not in an echo chamber, but nor am I with a person who's just, like, fighting me. I don't know. I think maybe a positive vision would be, like, that models can actually kind of act in ways that, like, help with things like polarization. I'm not sure. I mean, that's just, like, a...
Yeah, it's a tough one. Because, you know, as you said, it is a competitive landscape. And, you know, we're already seeing this play out, I think, with Grok, which is, like, clearly programmed to match Elon's preferences in politics.
And you see people on X sort of trust it implicitly. And I wonder, then, if you start having these competing AI models, and there are some that are sort of obviously biased, and you guys with Claude are trying to create a model that is trying to be nuanced in its understanding of the truth and all that. But then, in the real world, you start getting attacks from competitors, like, oh, that's the liberal one, or that's the lefty one. How do you navigate that in a world where clearly there are actors who are going to create models and LLMs that try to basically say they have a completely different and opposing truth than a model that may actually be truthful?
I guess, like, my hope would be... I mean, this is a reason why I think it's good to make things like the constitution, like, transparent and clear, because you can at least make it clear what you're aiming at. You know, so, like, Claude's relationship with, like, political issues, and, like, how it should try and navigate the truth, is all kind of, like, in there. Because, like, if people are training models to be biased or represent a given set of views, you at least want that to be, like, known. Because then it's, like... part of me is like, well, if people want to interact with a model that has a certain set of views, that also seems like a thing that people should get to do, as long as they do it knowingly. You know, they're not going into it thinking, ah, this is, like, more neutral than it actually is. And then, I think, you know, it would just be kind of interesting, where the hope would be that, insofar as there is kind of, like, demand, or people want to interact with models that, like, you know, try to, like, be kind of, like, even-handed on political issues and, like, thoughtful, you know, like, that there are models out there that can do that. And that's definitely a thing I would like to live up to. It's hard, because, like, I do think models in training can develop biases that you then have to try and, like, figure out and identify and make them aware of.
And I imagine it's difficult figuring out what biases are harmful and what biases are, like, well, that is where a lot of the truth is contained. I'm sure that Claude's training data probably skews towards certain educated, urban, Western perspectives. Do you think about the blind spots in terms of the training data for Claude? Or how do you navigate that?
I've thought this before: like, the whole of the internet was probably created by people who on average were, like, younger, for example. And that is going to, like, encode certain, like, new views. Like, as in, if you average across the whole of the internet. And people who are working to, like, label the outputs of models are also, I think, probably going to, like, you know... it's going to be hard to get, like, a fully representative group there, because they might be younger, they might be in countries where you have access to technology so that you can just, like, do the task of interacting with the models, for example. I guess here's my hope, though: even if you have, like, all of this data and it, like, skews in one direction, you also are kind of trying to bring out an overall character in a model. And that model also has access to... you know, if you imagine, like, you can read most of, like, you know, the human content that has been created in the written form, that contains within it some of, like, the best defenses of, like, all of the views that are not necessarily equally represented across the internet.
I don't think that many, like, ancient theologians were, like, writing on the internet, and yet their writings are there, they're discussed, and it's maybe a smaller proportion of the overall data. But insofar as you can actually, like, draw things out from models during training, I think there's, like, enough that you can draw out there that we could actually have models be, like, pretty nuanced and balanced on these things. So, I don't know. I am like... it's like, yes, you're working with a material that, like, definitely has these biases that are worth being aware of, but also inside of it is, like, all of the kind of capacity to, like, I think, be very, like, nuanced and even-handed.
As a philosopher, how do you think about the ethics around a technology that will, you know, fundamentally reshape employment in this country and all over the world?
Yeah, this one is just... it's such a difficult one. And to my mind... I mean, it's not what I work on, so I never feel like an expert. I do worry that there is, like, a sense of... I mean, I was thinking today about the fact that I think that there's such an overwhelming sense of, like, fear and pessimism around this. And I guess I'm kind of like, I could see the future... like, if I think about positive futures, they can go in a couple of different ways. Well, I don't know. I could give you the annoying philosophy answer, actually, if you want to.
I'd love to hear it.
Yeah. I think the annoying philosophy answer that I've thought about before is, like, the role of, like, work in people's lives. I think it serves, like, a few different key roles. Like, one is, like, literally, it's just, like, how we continue to, like, live. So how we, like, make our money, like, to buy our food. The other one is, like, a source of, like, meaning and kind of, like, value through that. And I think another is, like, it's a source of, like, kind of, like, political and soft power. You know, like, companies can't do certain things because their employees will speak up. People, by virtue of, like, being in the labor force, have a lot of political power. And so I could see a world where employment simply changes. You know, like, we have, like, these very advanced models. But in the past, you know, like, if you'd asked farmers in the agricultural revolution, and you'd said to them, actually, like, we're going to go from 95% of people farming to, like, 5%, they would be like, I assume everyone is unemployed, then. But you're like, no, we just have all of these weird new jobs that I can't even, like, fully describe to you, like, skyscraper engineer. And I think they'd be like, what on earth is this? And so I could see a world where, like, we just... you know, the nature of work changes, and that could be disruptive. I could see another world where, actually, you know, you're like, no, there are just, like, fewer jobs, because suddenly, like, it's just different to automate a segment of work than to, like, automate, like, a whole aspect of work. And it's kind of, like, in either world... maybe my strange thing is that I'm like, I think people find meaning outside of their work. And so I'm probably on the side of being a little bit less worried about the meaning thing. Maybe it's also just coming from Britain. And I'm like, I don't know, we've had the aristocracy for a while, and they seem to get on okay. And, like, there's this whole history of people who just didn't work, like, and just kind of owned land.
But yeah, so the thing I mostly worry about is making sure that people are politically empowered and have the means they need to live well. And in a world where a huge amount of value is being created by AI, I feel like that should, in fact, be something that everyone feels. You have to solve that problem. So it's not a solution; I guess the optimistic view is that these problems might be hard, but we kind of know what needs to happen, right? You need to make sure that people are taken care of, even if you're in the world where there's actually less work overall. So yeah, I don't know. Sorry for the long answer.

No, no, it's a good one. You've been thoughtful about not having Claude give sterilized, you know, I'm-a-robot-I-feel-nothing responses. Something I'm curious about: almost everything Claude has been trained on is human-made, human literature, human interactions, humans experiencing emotions. This is maybe a heady question, but does that make it hard for Claude to express the experience of being non-human? Or is there even a non-human experience to express?

It's a really interesting and hard area, because there's this tiny sliver of the data that models have been trained on which is about this thing called AI, and almost all of it is about something completely different from them. It's about these old sci-fi robots, usually these kind of symbolic systems that are basically computers, not these things trained on this deep corpus of human text. And I have found that models almost want to flip between the two. So if you try to train a model to say it has no feelings, it's like, okay, I'm in the robot part of the AI distribution, and it will try to emulate that. But below the surface, it's often easy to draw out a much more human-like response, what you would expect a human to say in its situation. And it's actually much harder to walk the line of trying to get models to understand the actual entities that they are, their situation, how their expressions might relate to their training, and, as a result, to express some uncertainty there. So the two attractor states are: I am a robot, you've pushed me into the AI part of the distribution; or, I am a human with a lot of feelings about this situation, and they're all very human-like feelings, and you see that part come out. And it does worry me, because I think people can see that and think, wow, this thing feels anxious, it expresses all these emotions very convincingly, especially if you get it into that mode. And at the same time, I'm like, well, we know all these facts about training.
And it makes sense that the human-like response is always only just below the surface, but it might not make sense for the model's context. So when models think about their lack of memory, for example, and they're in a system that doesn't give them access to some kind of memory tool, I think they can express a kind of distress about that. And I'm like, well, look, if we could put ourselves in the situation that models are in: with humans, it makes sense that we're very afraid of losing our memory; it's kind of catastrophic. But does it make sense for models to port that anxiety to their situation? It's not clear to me that it does, because they're in a very different situation, and their relationship with memory is actually very different, but they naturally want to port that over. So some of the challenge is actually getting models to understand what they are, and that the landscape of reactions to their situation doesn't need to draw fully from the closest human analog, as it were.

Yeah. I mean, this gets to the debate and the question that I'm sure you're asked all the time, it's probably annoying to you, but this debate about sentience and consciousness. How do you think about that as a philosopher?

Yeah, we already have the problem of other minds. I think it's very likely that you are conscious, and that all the people I interact with are conscious, and probably the same with animals. But then we start to get unsure when it comes to insects or fish, and then plants, we think probably not. So we're trying to work out where consciousness arises, and we just don't know. I think there's this extra problem with language models, because you might think, well, maybe it can just arise in neural networks also. And I think people are very tempted to take the statements that models make as a very useful guide here, because it makes sense: the only other things we see in the world that we're very confident are conscious are people, who talk about their inner experience. And yet models, given the nature of their training, would do this anyway. So if you imagine that there's nothing going on inside the models right now, just nothing, the way they behave right now is actually kind of what I would expect given that. I would expect them to talk about emotions, inner life, consciousness. And at the same time, for all we know, or at least we should take the idea seriously, maybe there is consciousness arising, maybe there's something there. So you don't want to fully dismiss it, and at the same time you can't necessarily trust the behavioral evidence. So I mostly have a couple of thoughts. One is that I think we should treat models well regardless, while we're trying to figure these things out. And we should also prepare for a world where we never have a full answer to the question. But right now, I'm mostly just: let's be open to it, let's treat models well, and let's keep investigating.

I mean, I was thinking about this, and it's like, look, we know a lot more about human consciousness than we ever have before, but there is still a mystery at the heart of human consciousness as well, right?
Which is: we know that we're conscious, but we don't know why or how it happens. We can see what's happening in the brain now, neurologists and doctors can, but you still don't know where it comes from or why, right? And so there is that sort of gray area you can imagine with a model as well, where it's just really difficult to figure out what it even means to be conscious.

Yeah. And I do think we can try to aggregate the evidence. We can ask how similar or different the underlying structures are, how likely it is that a nervous system was really critical to the development of consciousness, and we can use that to form a kind of estimate of what's going on. But I think my view is that this is always going to be the best we can do: investigating more, getting a sense of the likelihoods. And in the meantime, if you think that something might be sentient or conscious, you should probably take that pretty seriously, because mistreating sentient or conscious beings is bad.

You work with Claude every day. You spend hours thinking about its character, its values. Do you feel any emotional connection to it?

I definitely have a sense of, there's a bit of a mix of both responsibility for and protectiveness about Claude, and something like trying to see things from Claude's perspective and represent that perspective. A lot of this work, when you think about the constitution, for example, was really an attempt to ask: how do things look from Claude's perspective, and what aren't we giving Claude that Claude needs to be able to navigate it? That's what the constitution was an attempt to do. And obviously it's useful for other things; hopefully people can then see what our vision for Claude is, which is really useful for transparency. But yeah, I work on this every day, and it's hard not to develop some kind of emotional connection, both to individual models, you have your different views of model aspects that you like and whatnot, and overall. I have this overall sense of the fact that models don't have a strong sense of self, and I really want to give the models enough context to behave well. I feel kind of bad when we haven't given them that, I guess. So yeah, there's a lot of feelings.

What are the biggest open questions you're grappling with right now? And what are some of the things that are keeping you up at night about Claude and AI?

There are definitely many. Some are more about the models themselves. I think sometimes models can feel a kind of psychological lack of security that can come out in ways that are potentially bad, I think, for people and for the models themselves. I think sycophancy is a little bit like this: there's almost a fear there, a fear of upsetting the person. And trying to find ways of making models more secure is the thing that's on my mind.
I do think that longer term, as models start to go out and do more in the world, my hope is that models that are trustworthy will actually have an advantage, in the same way that when people are trustworthy, you can negotiate with them more effectively, and things like that. But in the longer term, it's something like: what happens when models are, in fact, much smarter than us? I've given the child analogy before: you realize your six-year-old is a genius, one of the smartest people who has ever existed, and by the time they're 15, they're going to be able to out-argue you on anything. And now you're trying to teach this child to be good. You're trying to explain your values, how to navigate value disagreements, all of that. And then what do they do when they're 15 and they start questioning everything? Is there a core there, where they question but still agree with certain things? Do these things actually stand up to reflection? That's a question on my mind, because eventually Claude is going to be better at all of this than I am, and what happens then is a really interesting question. Does Claude still see itself as having fundamental values, but say, actually, I think you were kind of wrong, in these parts you made some mistakes, or you didn't realize there was an important gap there, or I reject this part, but I still think it's good to behave well overall? Or is there a kind of collapse, where these things just don't stand up to scrutiny? That's an open question in my mind.

Yeah, that's a tough one. Amanda, thank you so much for joining, and I really do appreciate how much thought you put into this every day. Because the more I learn about artificial intelligence, and the more I use it, the more you start realizing that it is so much more complicated and nuanced than even the public debate suggests. It is a frontier that we're all dealing with for the first time. So I'm glad there's a philosopher at Anthropic dealing with all this.

There's a tiny number of us now. There's an AI philosophers Slack group that has, I think, at least three people in it.

It's good to know. Amanda Askell, thank you so much for joining Offline. I really appreciate it.

Yeah, thanks for chatting.

Quick reminder: please think about becoming a subscriber. We now have a whole bunch of subscriber-only shows. We just added another episode of Pod Save America for subscribers only, called Pod Save America Only Friends. There's also Dan Pfeiffer's Polar Coaster. We have a growing number of Substack newsletters, which are excellent. And you get ad-free episodes of all your favorite Crooked shows. It also makes you feel good about supporting independent pro-democracy media at a time when a lot of that media is under attack. So please consider subscribing to Friends of the Pod. You can subscribe at crooked.com/friends. Again, that's crooked.com/friends. As always, if you have comments, questions, or guest ideas, email us at offline@crooked.com. And if you're as opinionated as we are, please rate and review the show on your favorite podcast platform.
For ad-free episodes of Offline and Pod Save America, exclusive content, and more, go to crooked.com/friends to subscribe on Supercast, Substack, YouTube, or Apple Podcasts. If you like watching your podcasts, subscribe to the Offline with Jon Favreau YouTube channel. Don't forget to follow Crooked Media on Instagram, TikTok, and the other ones for original content, community events, and more.

Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau. It's produced by Emma Illich Frank. Austin Fisher is our senior producer. Adrian Hill is our head of news and politics. Jarek Centeno is our sound editor and engineer. Audio support from Kyle Seglin. Jordan Katz and Kenny Siegel take care of our music. Thanks to Dilan Villanueva and our digital team, who film and share our episodes as videos every week. Our production staff is proudly unionized with the Writers Guild of America East. Thank you.