What Would It Take to Actually Trust Each Other? The Game Theory Dilemma
45 min
Jan 8, 2026
Summary
This episode explores how game theory—a mathematical framework developed to understand competitive games—has colonized modern institutions, policy, and AI development, creating a world of strategic calculation that undermines trust, authenticity, and cooperation. Hosts Tristan Harris and Aza Raskin and guest Sonja Amadae argue that game theory's assumptions about human nature are limited and that recognizing alternatives like solidarity, commitment, and trustworthiness is essential to preventing AI from amplifying zero-sum competition into catastrophe.
Insights
- Game theory's core assumption that all value is scarce and competitive is false; most human values (love, esteem, self-actualization) are positive-sum goods that don't diminish when shared
- Game theory logic creates a self-fulfilling prophecy: once adopted widely, it selects for Machiavellian and psychopathic traits in leadership, making the world appear to require such behavior
- AI represents the ultimate maximization of game theory logic, and because AI can search language space to find optimal manipulation strategies at scale, it poses an unprecedented threat to human autonomy and social reality
- Breaking out of game theory requires individual acts of trustworthiness (cooperating if others cooperate) and collective imagination of alternative possibilities, not just intellectual argument
- High-trust societies like Finland demonstrate that humans naturally cooperate when not trained in game theory, suggesting the framework is a learned mindset rather than human nature
Trends
- AI arms race framed as existential competition requiring asymmetric advantage, blocking regulatory guardrails at state and international levels
- Language becoming primary battleground for AI manipulation as AI systems move from being prompted by humans to prompting humans at scale
- Shift from multilateral cooperation frameworks to asymmetric power strategies in national security policy (UK 2025 security strategy cited)
- Game theory colonization expanding from economics and policy into dating, software design, political campaigns, and cultural production
- Growing recognition that institutions designed on game theory principles select for dark triad traits (narcissism, Machiavellianism, psychopathy) in leadership
- Emergence of alternative frameworks like nonviolent communication and self-other overlap AI training as potential counterweights to strategic rationality
- Historical precedent (The Day After nuclear film) showing how existential clarity can shift policy from competition to cooperation
- Distinction between descriptive game theory (how humans sometimes behave) and prescriptive game theory (how institutions force humans to behave) becoming critical
Topics
- Game Theory Limitations and Assumptions
- AI Arms Race and Strategic Dominance
- Trust and Cooperation vs. Strategic Rationality
- Positive-Sum vs. Zero-Sum Value Systems
- AI Alignment and Language Control
- Nuclear Deterrence and Game Theory
- Prisoner's Dilemma and Escape Routes
- High-Trust Societies and Social Cooperation
- Dark Triad Selection in Competitive Systems
- Nonviolent Communication and Conflict Resolution
- AI Regulation and Federal Preemption
- Solidarity and Collective Action
- Authenticity vs. Strategic Communication
- Self-Other Overlap in AI Training
- Existential Risk and Coordination Problems
Companies
Google DeepMind
Developed AlphaGo AI that made Move 37, illustrating how AI can discover novel cooperative strategies beyond human ga...
OpenAI
Referenced in context of ChatGPT's shift from being prompted by humans to prompting humans, exemplifying AI's emergin...
RAND Corporation
Defense think tank involved in nuclear game theory research since 1950s and now researching military/strategic implic...
People
Tristan Harris
Co-host and co-founder of Center for Humane Technology; leads discussion on game theory colonization of technology an...
Aza Raskin
Co-host and co-founder of Center for Humane Technology; discusses AI manipulation of language and self-other overlap ...
Sonja Amadae
Professor of political science at University of Helsinki and director of Center for Existential Risk at Cambridge; au...
John von Neumann
Mathematician and physicist who invented game theory in 1940s to formalize parlor games; later applied to nuclear wea...
Oskar Morgenstern
Co-author with von Neumann of 'Theory of Games and Economic Behavior,' foundational game theory text
Abraham Maslow
Psychologist whose hierarchy of needs framework is used to illustrate non-scarce, positive-sum human values
Marshall Rosenberg
Inventor of nonviolent communication; cited as example of discovering alternative communication moves beyond game the...
Mahatma Gandhi
Historical example of solidarity-based movement that broke out of game theory logic through nonviolent resistance
Ronald Reagan
U.S. President whose nuclear strategy was reportedly changed by 'The Day After' film, demonstrating power of existent...
David Sloan Wilson
Evolutionary biologist who worked with E.O. Wilson on multilevel selection theory; cited for insight that altruistic groups outcompete ...
E.O. Wilson
Sociobiologist whose work on cooperation and mutual aid provides alternative to selfish gene interpretation of evolution
Luis Alvarez
Physicist and 1968 Nobel laureate, quoted as a critic of game theorists: 'very bright guys, no common sense'
Albert Einstein
Referenced as representing higher level of consciousness needed to escape pure mathematical logic of game theory
Quotes
"If I don't do it, they will. You know, if I don't race for that attention and hijack people's psychological vulnerabilities to build social media doom-scrolling machines, then I'm just going to lose to the other company that will."
Tristan Harris•Early in episode
"Game theory misses fundamental aspects of what it means to be human."
Tristan Harris•Introduction of Sonja Amadae
"If you're in the world of scarce goods, everything is a prisoner's dilemma, and you really, it is non-navigable. But the way out of that, and I think it's so simple, is that you just ask yourself the question, if the other guy went ahead and cooperated ahead of me, do I cooperate or not?"
Sonja Amadae•Mid-episode
"Selfish individuals do outcompete altruistic individuals. But groups of altruistic people outcompete groups of selfish people. And everything else is commentary."
Tristan Harris•Late in episode
"With AI, game theory becomes destiny. And that destiny is a thing nobody wants. That also has negative infinity. And so if we can all see that and see it clearly, that means cooperation does become the rational thing to do."
Tristan Harris•Conclusion
Full Transcript
Hey everyone, it's Tristan Harris. And I'm Aza Raskin. Welcome everyone to Your Undivided Attention. So Tristan, today I think is actually one of our favorite episodes because we're diving really deep into a way of seeing the world that feels very obvious, that feels sort of like you're naive if you don't adopt it, but that is causing the deadening of our world. And that is game theory. Yeah, I mean, and the simple way to boil that down is the logic that you've heard on this podcast before around AI and social media. Well, if I don't do it, they will. You know, if I don't race for that attention and hijack people's psychological vulnerabilities to build social media doom-scrolling machines, then I'm just going to lose to the other company that will. If I'm a movie studio and I don't release Spider-Man 7 while the other guy is releasing Batman 10, I'm just going to lose the game of building successful movies. If I don't build the advanced AI as fast as possible and take all the shortcuts, even though taking shortcuts is bad for humanity, well, then I'll just lose and they'll win. And cooperation, therefore, is for suckers. And this logic, you know, feels inescapable. It feels like it's a fundamental law of human nature. But this episode with our guest Sonja Amadae is about why it's not actually a fundamental law. It's a specific way of looking at the world, a way of looking that was invented by humans. We sort of call this the game theory dilemma, which is to say that if I adopt game theory and you don't, you lose. So game theory was actually invented in the 1940s by one of the greatest mathematicians and physicists of all time, John von Neumann. And he was trying to understand how you formalize winning parlor games like chess and poker. And this ended up getting used all the way up to our most existential threats, like the nuclear bomb and how it gets deployed.
But there's something very interesting that happened, which is to treat all of human endeavors like a chess or poker game that is winnable. And so there's been this propagation of games, winnable games, as the fundamental substructure of everything from war to AI. So our guest today, Sonja Amadae, argues that it doesn't have to be this way, that game theory misses fundamental aspects of what it means to be human. She's a professor of political science at the University of Helsinki. She's also a director at the Center for Existential Risk at the University of Cambridge. She's the author of a book on exactly this topic, Prisoners of Reason: Game Theory and Neoliberal Political Economy. Professor Amadae, welcome to Your Undivided Attention. I'm delighted to be here. Thank you for the invitation. Just to sort of lay out the problem, it's that if I use game theory and you don't, I will outcompete you because I'm acting strategically wisely. So if you don't know game theory, then you're the sucker. So that sucks everyone into using game theory. But that changes who we are. You're changing the basis of trust, you're changing the kind of society that gets created, and we don't want to live in a society that is purely ruled by game theory. And that's sort of like the game theory dilemma, if you will, the dilemma of game theory itself. So the reason that Aza and I were so interested in doing this episode is that if you look around the world, the world kind of feels like it's being colonized by this cold strategic logic. Let's just give a few examples of where this is showing up across a few different domains. It struck me in doing research for this episode that game theory can colonize dating. So pickup artistry is like a game theory version of dating, where people are making a cold calculus of, I'm going to say and speak the thing that will get me the outcome that I want. And I can measure that if I do this action versus this action, it will lead to this result.
If I'm designing software, like, you know, I should be designing software like Aza's dad, who started the Macintosh project, thinking about what's good for people. How do I make this really usable? What's going to lead to these really positive outcomes for society? But then I noticed that there's these other guys that are making software in a race to hijack human attention, which means they're racing to hijack human vulnerabilities, which means that they're actually measuring, using A/B testing: if I design it this way versus this way, I'll actually get more results. I'll get more engagement. I'll get more screen time. I'll get more people scrolling for longer. They'll come back more often. If I make the button red instead of blue, or if I use a notification, or if I highlight that their best friend, or the girl that they've been spying on, you know, actually liked their post. And because they're in this logic of measurement, game theory colonized software design. Or then memetics and culture and political campaigns, where you have a politician who maybe wants to say something authentic and true for them and meaningful and heartfelt and sincere. But then they're told by their advisors, no, you can't say that. We measured the results of these different communications, and you should say it this way versus that way. And what it leads to is this kind of deadening of culture, this deadening of dating, this deadening of relationships, this deadening of software design. And then now you get to AI, where AI is here. And instead of designing AI in a way where we focus on designing cures for cancer, for all of us who have loved ones with cancer right now, and really focusing on that so we can actually get the benefits of that direct outcome that supposedly this is all for, we're seeing companies in a race to scale these crazy, super, you know, uncontrollable, inscrutable, powerful intelligences under the maximum incentives to cut corners on safety.
And so in every way, game theory has colonized not just technology and software, but like more and more of our total world. And I want people to get this because I think it helps explain, almost, there's a good news to it, which is what you see out there in the world when it feels dead or meaningless or cold or strategic. That's not authenticity. That's actually just a world that has been colonized by game theory. And so what I want to get for this episode is how do we help expose how this logic really took over? So I think, can we tease that out a little bit just so that people can get a little bit of a flavor of why this is so critical? The most basic point would be to look at the original text, which was John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior. Expected utility theory was part of this technological, decision-theoretic breakthrough that allowed social scientists that were using that approach to claim that anything that has any value at all can be captured by expected utility theory. Von Neumann thought that all value could actually be monetized, which you could argue about, but that's the way he thought about it. He thought that you could put a monetary value on anything by watching people's behavior, seeing what they're willing to pay to have a certain outcome. Basically, he had that idea. You could put a monetary value on everything that would motivate people, that would incentivize people. And expected utility theory let you do that. It's probably important to let people know a little bit about von Neumann. Yeah, who is John von Neumann? He seems like such a pivotal figure. John von Neumann, well, first he was operating in quantum thermodynamics. So he axiomatized quantum theory. He's a mathematical prodigy and genius. He immigrates to the United States prior to the Second World War because it wasn't safe. He had Jewish ancestry.
So he moves to the United States and he takes up a post at Princeton, which then was this location from where he ended up playing a pivotal role in the Manhattan Project, which is the building of the atomic bomb that was then used on Hiroshima and Nagasaki. During the Second World War, he actually chose the targets of Hiroshima and Nagasaki. He was on the committee that made those decisions. So just to quickly tie, let's see if I'm getting this history right. Von Neumann is trying to understand how to win at games of chess and poker. He's trying to formalize these sort of parlor games. And to do that, he has to make an assumption about human nature and an assumption about the game being played, which is that you have to win. There is no such thing as cooperation in chess. Then that model that he creates gets picked up and used, because he's part of the Manhattan Project, to model the quote-unquote game between all the great powers. And so now this very dimensionally reduced model of what humans are, one where we don't cooperate, is now the basis for the most important decisions the world is making. We've applied a theory of parlor games to nuclear weapons. Yeah. Yeah, exactly. And that's how you end up with a world where thousands and thousands of nuclear weapons are built on both sides, enough to destroy the entire world. And that is what keeps the world safe, even though it's safe under the, you know, just hair-trigger, hairline sort of level of fragility, where just one little false step could still end the world. And yet that was the, quote, rational thing for us to do. But if you try to escape that logic, like you say, well, we shouldn't build nuclear weapons, and you come in as a peace activist, and you say, we should just dismantle all nuclear weapons. Well, how do you stop the other guy from doing that? And you end up with: game theory feels inescapable. If I don't do it, I just will lose to the other one that will.
Yeah, and what you see a lot today in the way that game theory and the prisoner's dilemma are projected in this arms race over AI is asymmetric power. So the UK security strategy for 2025 is all about asymmetric advantage. And that is a real change of worldview from a classic liberal multilateral world where we would be hoping for mutual benefit. And game theory would lead you to conclude there's no other way to come to this solution, quote unquote, of this situation. It's non-negotiable, non-navigable. If I'm the guy that is going to be cooperating, people will trample me. I will not survive and propagate. You're seeing game theory everywhere: it's in public policy, it's in economics, it's in political science, it's in nuclear deterrence, it's in biology, evolutionary game theory. And the idea in game theory is that you would only ever say something strategically. And when you are a game-theoretic actor, every time that you say anything, it is only what you need to say to get a specific outcome. So it's deeply embedded in the architecture of our world. So a moment ago, you heard Sonja refer to the prisoner's dilemma. This is a classic game theory problem showing why two rational individuals might not cooperate, even when it seems beneficial, and that leads to a worse outcome for both. It's called the prisoner's dilemma because it imagines a scenario where there's two prisoners from a crime, and they're being interrogated separately, and each one has to decide, do I stay silent or do I betray the other? If they both stay silent and say that they didn't do it, then they both get light sentences. But each is tempted to betray the other and say that the other one did it, and that way they can go free. If they both give in to that temptation, then they both end up with harsher sentences than if they had just cooperated.
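The trap described above can be made concrete with a tiny sketch. The payoff numbers below are illustrative, not from the episode; they just satisfy the standard prisoner's dilemma ordering (temptation > mutual cooperation > mutual defection > sucker's payoff):

```python
# One-shot prisoner's dilemma with illustrative payoffs (higher = better outcome for you).
# Key: (my move, their move) -> (my payoff, their payoff)
PAYOFFS = {
    ("silent", "silent"): (3, 3),   # both cooperate: light sentences
    ("silent", "betray"): (0, 5),   # I stay silent, they betray: they go free
    ("betray", "silent"): (5, 0),
    ("betray", "betray"): (1, 1),   # both betray: harsher sentences for both
}

def best_reply(their_move):
    """My payoff-maximizing move, holding the other prisoner's move fixed."""
    return max(["silent", "betray"],
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Betrayal is a dominant strategy: it is the best reply to either move...
assert best_reply("silent") == "betray"
assert best_reply("betray") == "betray"
# ...yet mutual betrayal (1, 1) leaves both worse off than mutual silence (3, 3).
print(PAYOFFS[("betray", "betray")], "<", PAYOFFS[("silent", "silent")])
```

That is the whole dilemma in four dictionary entries: each player's "rational" move, computed in isolation, produces the outcome neither wants.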
In my book, Prisoners of Reason, one of the things I really struggled with is how do you present the prisoner's dilemma in such a critical way that when people finish reading the book, they would question the logic of the prisoner's dilemma. And the whole book is written under that attempt to unlearn it from people, even though it's teaching the prisoner's dilemma at the same time, so people become critical consumers of game theory. And it's very, very, very difficult to do that. And then there's this anomaly about, well, why is it that actual humans don't necessarily follow the logic of game theory? And especially those that are untutored in game theory, the ones that haven't been exposed to this logic or taught it methodically in classes, they end up being the ones that would probably be more cooperative. I work in Finland at the University of Helsinki, and I think it's actually a crime of some kind to teach the prisoner's dilemma, because the students just cooperate there. They can't fathom it. And I've done these, not experiments, but simulations, and often it's the foreign students that would be more prone to be in a scenario where they would try to take advantage. And for the Finnish students, the logic doesn't make any sense, because Finland is a very high-trust society, and it doesn't run according to this logic of either game theory or the prisoner's dilemma. Not at the moment, anyway. And is the reason that it would be a crime, or you feel like it's a crime, to teach it to the Finnish students, is it because once they learn it, it even starts to shift some of their thinking and behavior? Yeah. Finnish kids, students, they are naturally more cooperative, creating a more trusting society. And to introduce game theory to them interpersonally means you're changing the basis of trust, or changing the kind of society that gets created. And we don't want to live in a society that is purely ruled by game theory. Strategic rationality. Exactly.
And that's sort of like the game theory dilemma, if you will. Once you see it that way, it's almost its own memetic kind of infection. It actually infects everyone else's thinking. And the more people think in terms of that way, the more people are actually operating from a calculated place, the more people's speech is calculated, the more they start to out-compete others, and the more that group starts to out-compete everybody else who's not operating with game theory. So it has this kind of dominating, totalizing, you can see it like a global virus, like coronavirus, but it's a game theory virus sort of colonizing the world and bringing more people into that mode of reasoning. So theoretically, if actors can actually find some authentic, trustworthy place, like there's jokes about, what was it, Esalen was doing hot tub diplomacy, where you had some of the Soviet nuclear scientists with the Americans, I don't know if they were nuclear folks, but I know there are people that were involved, and there's these jokes about hot tub diplomacy. You've got to get people in a hot tub just like actually talking to each other as raw human beings, reckoning with what's actually at stake. But to do that, you need this communication. You need authentic communication. You are a trustworthy actor who's communicating with me honestly about what you actually feel. And I'm a trustworthy actor who's receiving your communication and communicating honestly in return. And in a way, the whole problem is trustworthiness. So when people start to shift from communication that's honest to communication that's calculating, where the word communication is almost a false idea, we're actually signaling to each other. So I'm speaking tokens at your brain that I'm calculating, and you know that I'm speaking tokens at your brain. So then you counter-respond with tokens at my brain.
You see how game theory starts to kind of make the whole world feel inauthentic, make the whole world feel calculating. And if we don't do something about it, we end up in this bad outcome. And that's what nations do, right? North Korea sends a calculated statement where they use exactly these words, but not those words, because they're trying to escalate in kind of this tiered signaling regime. But you're just saying, you're bringing up so many important points about the way that communication is so fundamental, but then also the way that communication itself doesn't get to be a useful tool in game theory, because it becomes itself colonized by game theory. And just to build on that a little bit, the game theory dilemma is that if we can all see that the world that everyone operating on game theory creates, and then AI, which perfectly operates on game theory, that world either is non-existent or is one nobody wants to live in. And it's by seeing that that's a world nobody wants to live in that we create the opportunity for choosing something much more human. And just to sort of double underline why AI is so central to this conversation, and we said this in the AI Dilemma talk we gave several years ago: AI arms every other arms race. If there's a military arms race, AI arms and supercharges the military arms race. If there's a corporate arms race, if there's an A/B testing, memetic, you know, political communication arms race, AI will arm that arms race too. And so the reason that we have to reckon with game theory itself is because AI is like the maximization of game theory logic, which is its own kind of singularity of just catastrophe. And so AI is almost like a gift to actually look at the inadequate framework of game theory, because it's already been inadequate, but we kind of keep pushing the can down the road. But now, because it's sort of making every problem that comes from game theory so visible, we have to reckon with it itself.
So in the search for solutions about how we escape game theory, it's really important for us to look at what are the assumptions that game theory makes about human nature, so we can start finding where there are cracks. So can you outline what are the assumptions that game theory makes about human nature? So according to game theory, value has to be scarce. And since game theory says that everything valuable can be accounted for in its metric accounting system of what is valuable, then everything that humans would value would need to be scarce. But if you look at, for example, my favorite, the Maslow pyramid, where you look at all the different levels of what has value, and if you look at esteem, self-confidence, all of the higher levels of the Maslow pyramid, they're usually positive-sum aspects. If someone gets a good night's sleep, for example, that usually doesn't take away from somebody else getting a good night's sleep. Or if somebody feels self-esteem, that shouldn't detract from somebody else. So right away, we're in a world where all of the things that we can put a valuation on are scarce, and we're going to be competing over them. And actual relationships, friendship, love, family, having children, most of what we value, I would argue, is actually these positive-sum goods that are never going to even begin to enter into some kind of a game theory payoff, right? What's the payoff? Just for listeners, this is Maslow's hierarchy of needs. It's a framework that Abraham Maslow came up with for what are the different hierarchies of human needs, starting at the base foundational level of, you know, shelter and sleep and biophysical needs, going up to these more abstract needs of self-esteem and then eventually self-actualization, love, belonging, community. And your point is that those things are not zero-sum.
If I have, you know, esteem, this is why, you know, corporations and organizations are always about, you know, doing appreciation days. And we really appreciated this employee who did this and this and this. These are ways of doling out more of a fulfilling society that's not zero-sum. And there's also, hearing in there, the assumption that only things that can be measured matter, because only then can you reason on them. So how do you put a number on love or on friendship? And so then game theory just doesn't have anything to say about it, so it doesn't model it. No, it's worse. It will do a Sophie's Choice move and say, no, but you will save one child before the other if there's a fire. And that's the horrible thing about the way game theory does valuation of what's important to people. We'll say, no, it can always, that's what von Neumann would say. No, you can always put someone in a situation where they'll need to choose. And when they're making that choice, then you can do that preference architecture of mapping what people's desires are and maybe now their intentions. So it's very insidious, because it lifts us out and it constructs a world. If you're creating institutions according to this logic, you're constantly putting people in situations where they will feel like it's non-navigable, to start perceiving and acting in a world according to that fundamental assumption that anything that's valuable is scarce and competitive. It's very frightening. It's like a nightmare. It's just like putting ourselves in a nightmare world and then saying, oh, but you'll never wake up from this nightmare. I think it's important to note that in a world that has been colonized so much by game theory, what is effective and what is just Machiavellian blur together. And that world selects for psychopathy and Machiavellianism, the dark triad characteristics, basically. So dark triad being narcissism, Machiavellianism, and psychopathy, so the inability to empathize with others.
Because the better you are at not empathizing with others, the more you can act just cold rationally, the better you'll do at those kinds of cold games. The more Machiavellian and strategic your mind is, and you can just reason that way, the better you'll do at these games. And the more sort of narcissistic and kind of self-important you are, the better you'll do at these kinds of games. And so when you look out there in the world and you say the world looks like it's run by psychopaths, well, that's because the system being run more by game theory selected for those who would actually be complicit and not have a problem with playing that perverse game. And so it takes people that might even start compassionate, warm, et cetera, in their lives, and the ones who don't want to keep playing the game, they burn out, they do something else. The ones who do want to keep doing it are the ones who are capable of becoming sort of those dark triad folks. And I want people to know that that doesn't mean that actually that's the vast majority of people. It's actually a small set of people who've been selected for and put in the top positions of power. So you were getting through the assumptions, and you just gave us the first one of game theory. The assumptions. The other is this essentialism. This is not an invention. This is a discovery. This idea that we evolved to be these machines that have to propagate. And the way that you would do that is to be the perfect strategic actor. So it's an essentializing of this rationality. And then that reinforces that there's really no alternative. Like those of us who might want to be a different way, we will get suckered. We're going to fall by the wayside. All of those bad things.
And then the other assumption, that we are programmed to be this way, means there is no alternative, that you cannot but be an individual competitor, a strategic competitor, or you will pay the price for that. Let's see if I'm getting it right. So it's like the core assumptions: essentialism, that we're programmed to be strategic competitors, that 'if you're rational, then you do X' becomes prescriptive, not just descriptive. You have scarcity: only scarce things have value, hence competition is inevitable. And then the last one is that there's no alternative. The strategic competition is non-negotiable. If you don't play the game, you lose. If you opt out, you lose. And so if we dive into these core assumptions now, so if these are the assumptions that undergird, that game theory locks in, this is the only one way to see the world, how would we explore these assumptions or see if they're limited, one by one? Well, the first one is easy, the value, because I'm not sure about everyone, but many people probably do feel that there are aspects of their lived experience, if you're spending time with a loved one, or if you're feeling that this person is in some kind of pain and you have that empathy. I think most of us experience the higher levels of the Maslow pyramid and know that those are not zero-sum goods. They're inherently positive-sum, where if one person has self-esteem, it doesn't take away from another person's self-esteem. Not if you're at the advanced top of the Maslow pyramid. Maybe for a narcissist, if someone else has self-esteem, you'd want to destroy it, but not for mature adults that have evolved to the top of the pyramid. So that one, I think, is pretty easy to grasp. And then it's just a question, but how do we bring that love, empathy, and positive-sum goods into our world? So that would be the next question. So I have spent a long time thinking about that.
And I think it starts with understanding this logic of the prisoner's dilemma, because if you're in the world of scarce goods, everything is a prisoner's dilemma, and you really, it is non-navigable. But the way out of that, and I think it's so simple, is that you just ask yourself the question, if the other guy went ahead and cooperated ahead of me, do I cooperate or not? Do you believe my signaling that I was trustworthy? But if I'm actually not a game-theoretic strategic rational actor, I will cooperate if the other guy does. And then what you're trying to build is assurance and trust based on the fact that I am trustworthy. And we all know if we're trustworthy, and the trustworthiness just comes down to: do I cooperate if the other person does? And then you've broken out of the prisoner's dilemma, and you're starting to think about value in ways where value expands into two major concepts. One is solidarity, where you feel that solidarity with a common cause, with other people, and you'll fight for a cause. And we know, look at Tiananmen Square in China. Look at that video that lives on in all of our minds, of the man standing in front of the tank. Why? Why did he do that? That was not strategically rational. But the people that were protesting over and over again in history, like in the Gandhi peace movement, they had the solidarity, which meant that they had this way of connecting and working together that was very powerful. They stepped outside the logic of 'all this was inevitable, there's nothing that we can do,' and they did something that broke out of it. And they were trustworthy, and somehow the actions that they did tapped into something in the collective consciousness that broke through and popped out of some of the containers somehow. Yeah.
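The escape route described above, "I will cooperate if the other guy does," can be sketched as a repeated game. This is a minimal simulation under assumed, illustrative payoffs (the episode gives no numbers); the `trustworthy` and `selfish` strategy names are ours, chosen to mirror the conversation:

```python
# Repeated prisoner's dilemma sketch. "Trustworthy" = cooperate if the other
# player cooperated last round; "selfish" = the one-shot "rational" defector.
COOP, DEFECT = "C", "D"
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def trustworthy(their_history):
    # Cooperate first, then mirror the other player's last move.
    return COOP if not their_history or their_history[-1] == COOP else DEFECT

def selfish(their_history):
    return DEFECT  # always defect, regardless of what the other player did

def play(a, b, rounds=100):
    """Total payoffs when strategy a plays strategy b for `rounds` rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)   # each strategy sees the other's past moves
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa; score_b += pb
        hist_a.append(move_a); hist_b.append(move_b)
    return score_a, score_b

print(play(trustworthy, trustworthy))  # (300, 300): mutual cooperation compounds
print(play(selfish, selfish))          # (100, 100): mutual defection stagnates
print(play(trustworthy, selfish))      # (99, 104): one-on-one, the defector still edges ahead
```

Note how the last line echoes the Wilson quote from earlier in the episode: a lone selfish player beats a lone trustworthy one, yet a pair of trustworthy players earns three times what a pair of selfish players does.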
And a lot of work in game theory has been to say that that is irrational, that if you are able to work with solidarity, that's evil, it's communist, it can only happen if there's some kind of dictator incentivizing and controlling people, that it's not natural for people to have solidarity in terms of some kind of connection and a common cause. And the other thing is commitment. And commitment basically means that if you promise something, you go through with it. Finland, for example, is such a high-trust society that if you give your word on something, then that is who you are. Stepping entirely out of the world of game theory and saying, I will carry through on my promise no matter what. I mean, so banal, right? Keeping one's word. How did we lose that? Isn't that fundamental to civil society? How did we lose the idea that that's just a fundamental choice for being a moral agent in a political economy? That's just baffling. We have to combat that by, it's very subtle and simple, but we have to believe what we say. And believing what we say sounds so trivial, but it's actually pretty difficult, because how many times do you just say whatever it takes to get some outcome, versus believing what you're actually saying? It's a basic duty of being a citizen in society: stating what we believe and then trying to make our statements true. So those are three pretty basic antidotes that we're all able to put into action. So let's just talk about how this all connects to the AI arms race. RAND, the same nonprofit defense think tank that has been involved in research on nuclear game theory and deterrence, has also been doing research on the military and strategic implications of AI since the 1950s. And AI was framed exactly like nukes: an existential technology requiring strategic dominance, where fear drives the race and game theory legitimizes the fear.
If anything, game theory got even more powerful inside the reasoning about AI, because AI is unique in that it can create step functions in my knowledge of physics, or a step function in my knowledge of math, or step functions in my knowledge of energy production. And those step functions in any of those scientific domains could create a step function in military domains or in industrial domains, where suddenly you can produce energy an order of magnitude more cheaply than me, or produce all goods an order of magnitude more cheaply than me, or suddenly produce an infinite supply of weapons in a way that I can't. Because AI is a race to arm every other arms race, and a race to these step functions, it actually favors this kind of race to an asymmetric advantage, which then becomes the policy, which then becomes: we shouldn't do anything to regulate or set guardrails on this at all. And it's why you currently have in the United States a proposal for a federal preemption on AI, meaning: we don't want any states to regulate AI, we're going to actively prohibit regulations at the state level, because we need a no-holds-barred race to asymmetric advantages in every sector. Yeah. And then the AI is programmed to be a strategic rational actor, because rationality is this thing that is game theory. When you put those two together, we interpret that there has to be this AI arms race. The U.S. wants total strategic dominance in AI for that exact reason, that it's going to give an advantage from which there's no coming back once the U.S. dominates in AI. It's escalatory in the sense that the AI will keep feeding back that logic of being rational, and then the human makers of policy will say: but we need this asymmetric advantage. And that's like the ultimate winning of this paradigm; the paradigm won. And then it's harder, because you and I can take those easy steps of knowing there's more value than scarce value.
We can be trustworthy. We can believe what we say, and we can cooperate with others and form groups. But how do we break that out of the highest-flung policy environment, especially when you see that the people in that environment have been trained for years in this way of thinking? How do you redo this, especially since the AI is going to be amplifying that set of beliefs? That's, I think, where we are right now. And I think that's quite a predicament. This reminds me of an example that I think we might have mentioned on this podcast before, of how you break out of this trap. The analogy isn't perfect, but think of relationship vicious spirals: two people are in a relationship, and one starts criticizing the other. The only way the other knows how to respond is, well, you criticized me, so, tit for tat, I'm going to criticize you. Well, did you know that you left the dishes out, or did this other bad thing? And then you end up in a downward spiral where both parties don't feel good at the end of the day, and they're left with a kind of collective relationship commons between them that is degraded by the fact that they both openly criticized each other. And if you're operating in that paradigm, it might seem like that's the only thing that could have happened: clearly that person criticized me, that's the only route we could have gone from there. And then you have Marshall Rosenberg come along, the inventor of nonviolent communication, who says, you know, actually it might appear that way, but it turns out there's this other way of communicating. I don't want to call it a strategy, because that makes it sound calculated and game-theoretic, but you basically respond with what it felt like to receive that, to hear that: when you said this, I noticed I felt that.
And you just start with that, because I'm sharing what the effect of what you just said was, what it did to me. And in sharing what I feel because of it, now the other person is empathizing with the impact of their actions. So it's creating connection at a higher dimension than the sort of value metric of who's winning the war of that communication exercise. And in a certain way, you can think of that as a creative move. Up until Marshall Rosenberg, maybe people had that in some other languages and other tribes throughout history, but Marshall Rosenberg put a new move onto the menu of human relationship and communication dynamics. And Aza, you've talked about how, just like there was Move 37 from AlphaGo, the AI that Google DeepMind built that played Go and beat the world's best player, and that came up with a new move no human had ever played, called Move 37. If you had AIs that are simulating the way that this could go, they could actually discover Move 37s that are positive-sum, that look for cooperative dynamics in situations where everyone was convinced there is no other move, definitely no better way to do this. And whether it's Move 37 for relationships or for treaties, you know, you've talked about this for treaties: what would Move 37 for treaties look like? AlphaTreaty. Maybe there are ways that AI can be a tool in searching for positive-sum games in a world where it looks like we're locked into zero-sum games. And that brings to my mind, I think, both of our favorite work in AI alignment, which is about self-other overlap. Because a lot of what you're saying here about nonviolent communication is that you are internalizing the effect of your words on someone else. It becomes part of you; there are mirror neurons. And in self-other overlap, this research is very interesting: they train an AI not to be able to distinguish the difference between I and you, self and other.
So that sentences like "you stole because your family needed food" and "I stole because my family needed food" become sort of the same, because I is equal to you. I think it's really interesting that the AI has been programmed to use the personal pronoun "I," when we can wonder whether it has the embodiment of being a human communicator. And actually, some colleagues and I put out the idea that maybe if we'd never let AI use a personal pronoun, then at least we could have disambiguated it, if that had been a hard-and-fast regulation. And my two colleagues thought that would actually have helped us not be where we are. But if we are trying to solve the alignment problem, and we don't really care whether the AI refers to itself as "I" or not, then it does seem that it might be possible to program it to not have that barrier or distinction. But that would be a bit of an experiment. And it's been tried. If we were to solve alignment with that and just cast it loose, it would be interesting to see what happens writ large. But I still have worries about language changing, about whether language is a strategic signaling game, and about how language would function between I and you if we dissolved that barrier while language remained strategic, because I think we'd want not to treat or experience language as a means of control. And I think this is so important with AI, because up until recently, when ChatGPT launched, we prompted AI. But what's changing in 2025, and certainly in 2026, is that AI prompts us. And so AI is A/B testing. We've had politicians and marketers trying to figure out what the most effective language is, and they have a small surface area over our lives. But AI is increasingly in relationship with major portions of the population. What is it, something like one in eight human adults are now in some kind of communication relationship with an AI?
And so AI can search through language space to find the most effective ways to manipulate us. And that is a kind of threat that humanity has never had to deal with. Yeah, and you had that sentence in your video, the main one on your website, where you talk about how language is now the fundamental unifier under all of these different domains that AI is being unleashed on. And because language is how we socially construct the world, we're letting AI take control of this profound tool of social world-construction, with whatever logic is programmed into how it uses language. And it does have the ability to totally dissolve our social reality if we don't find a way to control it. I thought that was probably the most profound of many profound moments in your conversation in The AI Dilemma. The real main thing we've been exploring here is whether, with AI creating the zenithification of game theory logic, there is a way out of it. And I'm curious about the ability to have this be a kind of jubilee, a break: the maximization of game theory leading to the desire to change game theory, to wake up from a single-cellular, narrow, self-interested logic that dominates the world into a multicellular, collaborative logic, in which we can feel the fear of all of us losing as greater than the fear of a world where I lose to you. But in order for that to be true, the way in which all of us lose has to be extraordinarily clear, and trustworthily communicated and received, by every agent who is in charge of making decisions about the way this goes. I have three thoughts. One, we have a lot of freedom of choice, and that starts with being trustworthy. And that starts with: if the other guy cooperates, I will. If the other guy doesn't cooperate, I'm not going to cooperate. But if the other guy cooperates, I will.
So there is freedom of choice that we have fundamentally as agents. Then I was thinking about the nuclear movie, The Day After. The movie Sonia is referring to here is The Day After, a 1983 film that depicts the brutal aftermath of a full-scale nuclear conflict between the U.S. and the Soviet Union. It was seen by millions of Americans; in fact, it was the most watched television film in history at the time, and it was screened for President Reagan and the Joint Chiefs of Staff. Reagan later said that the film changed his mind on U.S. nuclear strategy and encouraged him to pursue de-escalation with the Soviet Union. Maybe the point there is to create a Hollywood blockbuster that would be that for this moment, that would build up from: we can undermine those assumptions, and we can have that individual freedom outside of the AI world, to have that sort of wake-up moment. And then the third thing would be, I don't know about the major players programming at the AI companies. You guys are probably way more in touch with those people. But there is no reason we need to be stuck with this orthodox strategic form of rationality. I don't know whether the DeepMind scientists' approach is radical enough. But why are we stuck with a prisoner's dilemma, prisoner-of-reason type of approach to strategic rationality? Wouldn't it be possible to formalize a different kind? I think that if people could be, I don't like the word educated, but if there could be some kind of participatory environment where leaders are exposed to alternative ways of thinking, one that is carefully thought through, the way that you guys generate content. But those three things together: making people feel that they can opt out at an individual level and that they have the tools, even knowing where it is hard to opt out; and something like a collective imaginary event that captures this moment.
And then, to just go back to the foundations and realize we have so many alternatives, there's so much goodwill, and there are so many alternative realities and constructions of where we could be to draw from. So I guess I leave this conversation, which I've loved, optimistic, at least thinking that those three things, and some others, will take us in a better direction. One of the things, just to summarize what I think The Day After did, is that it made the cost of defection negative infinity. It became existential. So now cooperation becomes the rational thing to do. And I think the point of this conversation is to say that with AI, game theory becomes destiny, and that destiny is a thing nobody wants. That also has a payoff of negative infinity. And so if we can all see that, and see it clearly, then cooperation does become the rational thing. Yeah. Clarity, we say in our work, creates agency. And if we have clarity that the current destination is an outcome no one wants, we can choose something else. And, you know, it's a difficult picture. It is probably the hardest problem humanity has ever faced, certainly the hardest coordination problem we've ever faced. And yet, through this whole conversation I'm reminded of a quote I was pointed to recently, from Luis Alvarez, winner of the 1968 Nobel Prize in Physics and perhaps the greatest experimental physicist of the century, who remarked that the advocates of these sorts of game-theoretic schemes were, quote, "very bright guys, no common sense." There's this kind of over-intellectualization, where highly intelligent people build elaborate abstract models and trust their mathematical formalism too much, but ignore obvious real-world constraints, incentives, human behaviors, and deeper truths of human nature, inside of which may lie the answer to snapping ourselves out of this cold mathematical logic.
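The point that The Day After "made the cost of defection negative infinity" can be put directly into payoff terms: once mutual defection is catastrophic, a worst-case (maximin) reasoner flips from defection to cooperation. A toy sketch with illustrative numbers, not figures from the episode:

```python
# Maximin choice in a one-shot dilemma: pick the move whose worst case is best.
# Payoff numbers are illustrative; -inf models an existential outcome.
INF = float("inf")

catastrophic = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): -INF,  # mutual defection is existential ("The Day After")
}

def best_move_under_worst_case(payoff):
    """Return the move with the highest worst-case payoff (maximin)."""
    worst = {
        move: min(p for (mine, _), p in payoff.items() if mine == move)
        for move in ("C", "D")
    }
    return max(worst, key=worst.get)

# With ordinary payoffs, defection's worst case (1) beats cooperation's (0)...
ordinary = {**catastrophic, ("D", "D"): 1}
print(best_move_under_worst_case(ordinary))      # 'D'
# ...but once mutual defection is catastrophic, cooperation is the maximin move.
print(best_move_under_worst_case(catastrophic))  # 'C'
```

Nothing changes between the two cases except the mutual-defection entry: making that one outcome existential is enough to reverse which move a worst-case reasoner recommends.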
And so maybe we can, you know, since we're appealing to the high-credibility gods of inspiring figures of history here, take Einstein as pointing us at the higher level of consciousness we need to operate from to snap out of the lower level of consciousness that is the pure mathematical logic of game theory. Well said. I wanted to just call back: there were, as I understand it, two competing schools that came post-Darwin to interpret Darwin. One said it's just brutal competition. The other said, well, this is about mutual aid and cooperation. I think Darwin was the first person to ask, where do the noble traits come from? Like altruism and heroism, where do they come from? And we had an episode with David Sloan Wilson, who worked closely with the sociobiologist E.O. Wilson. And they have this wonderful phrase that sums it all up, why the selfish gene is sort of wrong, what it misses: selfish individuals do outcompete altruistic individuals, but groups of altruistic people outcompete groups of selfish people. And everything else is commentary. And game theory misses these noble traits that come from groups operating together, because noble traits are about giving something up for a greater whole. Yeah, it's team reasoning. And with team reasoning, you break entirely out of game theory. And really, that's where we are on the planet now, right? I mean, we need to figure out a way to cooperate rather quickly. We've already been colonized by institutions operating on game-theoretic logic, and once AI is building those institutions, changing language, and changing what's normal toward ever higher bars of strategic competition, if we don't find a way to derail from that, it's going to be pretty desperate.
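The Wilsons' summary, that selfish individuals outcompete altruists within a group while altruistic groups outcompete selfish groups, can be made concrete with a toy public-goods calculation. The benefit and cost values here are illustrative assumptions, not figures from the episode:

```python
# Toy multilevel-selection arithmetic: each altruist ("A") pays a private cost
# to produce a public benefit shared by the whole group; selfish members ("S")
# contribute nothing. Benefit/cost numbers are illustrative.
B, C = 5.0, 1.0  # benefit each altruist produces; cost each altruist pays

def payoffs(group):
    """Per-member payoff for a group given as a list of 'A' / 'S'."""
    shared = B * group.count("A") / len(group)  # public good, split equally
    return [shared - (C if member == "A" else 0.0) for member in group]

mixed = ["A", "A", "S", "S"]
# Within a mixed group, the selfish members free-ride and do better...
print(payoffs(mixed))  # [1.5, 1.5, 2.5, 2.5]

# ...but comparing whole groups, the altruistic group vastly outproduces
# the selfish one, which generates nothing at all.
print(sum(payoffs(["A"] * 4)))  # 16.0
print(sum(payoffs(["S"] * 4)))  # 0.0
```

The two comparisons pull in opposite directions, which is exactly the tension in the quote: selection within groups favors selfishness, while selection between groups favors altruism.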
But knowing it's an option, that we can be trustworthy, we can believe what we say, and we can have value that isn't scarce, maybe that's an inner light that starts to make a different imagining possible. If we can start to believe that there is an alternative possibility, then maybe that's the first step. With some very minimal building blocks, maybe we can start to create other social patterns, and not lose hope, thinking we need to be these strategic, cutthroat actors. Sonia, thank you so much for coming on Your Undivided Attention. It's been, I really think, one of the most important, completely under-the-radar conversations that needs to happen. Yeah, absolutely. Thank you, Sonia, so much for coming on Your Undivided Attention. We're so grateful to have you. And your book, Prisoners of Reason, is just so illuminating in highlighting this for everybody. So thank you so much for writing it and for coming on. I'm delighted. Really nice to meet you both. Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer, and our executive producer is Sasha Feigen. Mixing on this episode by Jeff Sudeikin, original music by Ryan and Hayes Holliday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and so much more at humanetech.com. And if you liked the podcast, we would be grateful if you could rate it on Apple Podcasts; it helps others find the show. And if you made it all the way here, thank you for your undivided attention.