329 | Steven Pinker on Rationality and Common Knowledge
77 min
Sep 22, 2025
Summary
Steven Pinker explores common knowledge—the concept that everyone knows something AND everyone knows that everyone knows it—and its critical role in human coordination, social relationships, and rational decision-making. The episode examines how common knowledge enables everything from driving conventions to financial bubbles, and why failures in common knowledge undermine cooperation and enable authoritarianism.
Insights
- Common knowledge is distinct from universal private knowledge; it requires recursive awareness (I know you know that I know that you know, etc.) and is essential for all social coordination
- Human rationality failures often stem from social factors rather than pure logic—people disagree partly due to different priors but significantly because they doubt others' rationality or honesty
- Public, conspicuous events are the most reliable generators of common knowledge, which is why autocrats suppress free press and why demonstrations are more powerful than opinion polls
- Social relationships are ratified by common knowledge; we often use euphemism and plausible deniability to avoid generating common knowledge that could threaten existing relationships
- Aumann's Agreement Theorem shows perfectly rational agents with shared priors cannot agree to disagree, yet people do—revealing either different priors or mutual doubts about rationality
Trends
- Rise of pluralistic ignorance in the digital age: people vastly overestimate how many others share fringe beliefs, creating false consensus spirals
- Common knowledge as a coordination mechanism in social movements: digital platforms enable rapid common knowledge generation, prompting authoritarian crackdowns on internet freedom
- Bayesian reasoning adoption in rationality communities as an antidote to over-updating on single data points and to the replicability crisis in science
- Recognition that financial bubbles, bank runs, and market crashes are common-expectation phenomena rather than rational responses to fundamentals
- Shift toward adversarial collaboration and steel-manning in academic discourse as an alternative to winner-take-all debate framing
- Understanding embarrassment and self-conscious emotions as driven by common knowledge rather than mere detection of social transgressions
- Reframing cooperation vs. mutualism: recognizing that many human interactions are mutually beneficial rather than altruistic, reducing the need for reputation-based enforcement
Topics
- Common Knowledge in Game Theory and Economics
- Aumann's Agreement Theorem and Rational Disagreement
- Bayesian Reasoning and Epistemic Humility
- Financial Bubbles and Common Expectation
- Coordination Problems and Social Conventions
- Language, Euphemism, and Plausible Deniability
- Nonverbal Communication as Common Knowledge Generator
- Self-Conscious Emotions and Social Embarrassment
- Public Demonstrations and Resistance Coordination
- Autocracy, Censorship, and Information Control
- Recursive Mentalizing and Theory of Mind
- Academic Freedom and Convergence on Truth
- Replicability Crisis in Science Journalism
- Poker, Game Theory, and Imperfect Information
- Pluralistic Ignorance and Conspiracy Theories
Companies
Facebook
Mentioned as catalyst for Arab Spring uprisings through social media coordination of public demonstrations
Google
Cited alongside Facebook as enabling common knowledge generation during Arab Spring before authoritarian control
Silicon Valley Bank
Referenced as recent example of bank run driven by common expectation and fear of others withdrawing deposits
People
Steven Pinker
Harvard cognitive psychologist and author discussing common knowledge, rationality, and human coordination mechanisms
Sean Carroll
Host of Mindscape podcast conducting interview with Pinker on common knowledge and rational decision-making
Robert Aumann
Israeli mathematician who proved Aumann's Agreement Theorem showing rational agents cannot agree to disagree
John Maynard Keynes
Economist cited for beauty contest thought experiment explaining speculative bubbles and financial irrationality
Richard Dawkins
Biologist referenced for selfish gene theory in discussion of cooperation and altruism evolution
Cass Sunstein
Harvard colleague of Pinker; discussed in context of clarifying abstract concepts like liberalism
Tyler Cowen
Economist who studied contrast between ideal and real argumentation in relation to Aumann's theorem
Robin Hanson
Co-author with Cowen on why people don't behave as rational agents in real argumentation
George Lakoff
Linguist who studied metaphors in language, including 'argument is war' framing discussed by Pinker
Mark Johnson
Philosopher and co-author with Lakoff on metaphorical thinking in language and argumentation
Michael Suk-Young Chwe
Author of 'Rational Ritual' exploring common knowledge in public demonstrations and coordination
Maria Konnikova
Former Harvard student of Pinker who became professional poker player and cognitive psychologist
Annie Duke
Cognitive psychologist from child language field who became professional poker player and strategist
John von Neumann
Mathematician who invented game theory to rationally address poker and imperfect information games
Irv DeVore
Late Harvard biological anthropologist cited for observation on eye contact duration and outcomes
Lindy West
Writer discussed for her essay on rejecting social pretense around obesity and common knowledge
Jared Diamond
Author of 'Guns, Germs and Steel' cited regarding Spanish conquest and coordination failures
Franklin D. Roosevelt
U.S. President who solved bank run problem through bank holidays and federal deposit insurance
John Ioannidis
Epidemiologist who published influential paper on why most published research findings are false
Yascha Mounk
Writer who critiqued firing of innocent people based on misinterpreted common knowledge signals
Quotes
"Common knowledge is necessary for conventions, driving on the right or driving on the left, respecting a leader or department chairman or an expert, respecting paper currency."
Steven Pinker
"If you're both perfectly rational and you start from the same beliefs, you should not be able to agree to disagree. You shouldn't even be able to just disagree and maintain the fiction that both you and the other person are perfectly rational."
Steven Pinker
"In the end you will leave because there is simply no way that 100,000 Englishmen can control 350 million Indians if the Indians refuse to cooperate."
Steven Pinker (quoting Gandhi)
"Without academic freedom, you can't converge on the truth because the process of publishing something, having other people attack it, try to falsify it—if you disable that process by canceling or punishing someone for what they believe, you're never going to find out what's true or false."
Steven Pinker
"What's the difference between an innuendo that everyone understands and blurting it out? The difference is generating common knowledge."
Steven Pinker
Full Transcript
Hello everyone and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. A few years ago, back when we still lived in LA, we had a summer project, my wife Jennifer and I. We took boating lessons. We'd climb on a power boat back in Marina del Rey and spend a few hours tootling around, learning to, like, park the boat, tie it to the dock, all these things. I've forgotten everything by now. I don't know any of the nautical terms anymore, but there was a moment when, if there was a disaster on the boat, I could help you bring it back to shore and tie it up to the dock. One of the interesting things was, of course, what you do when you're out there on the water and there's another boat that is on a collision course with you, right? Typically you don't have direct communication with the other boat. You're not on the radio. You can't just say, hey, I'm going to do this. You need to have some rules about how to behave in such a way that the two boats don't hit each other. Okay? And there are such rules. If you're literally coming head-on, then you're supposed to turn to the right. You're supposed to change speed and direction in a decisive way so that the other boat can read your implicit boat language, I guess. The point is it works very well, but the reason it works is not only because everyone, you know, the pilots of both boats know the same rules, but because they know that each other knows the rules, right? So if I'm supposed to veer my boat to the right, that works because both boaters know that they're going to veer to the right and they know the other one is going to veer to the right, so there's a coordination between them and everyone is perfectly safe. This is an example of what philosophers and game theorists call common knowledge. So common knowledge, as we'll talk about in the podcast, is a slightly misleading term. It doesn't just mean knowledge that lots of people have. It means knowledge that lots of people have and they all know each other has it.
So there's sort of an infinite regress. I know that you know it and you know that I know you know it and I know that you know that I know you know it, etc., etc., and that's why philosophers love this kind of thing. It also leads to very interesting mathematical results that you can prove in the context of Bayesian reasoning and the presence of common knowledge. There's a very famous theorem called the Aumann agreement theorem that says, roughly speaking, that if you have two perfectly rational agents with common shared priors about a set of claims that could be true or false in the world, prior probabilities, prior credences, and they have some different data, then they reach different posterior probabilities. They've updated their credences differently, but then they talk to each other and they just tell each other what their probabilities are after they got all this data, and they know that each other are perfectly rational. Then they should instantly come to agreement. They should basically update their own priors on the basis of the fact that this other perfectly rational person has updated their priors, and there's a right place to come to. Instantly is an exaggeration. There's no part of the theorem that says it has to be instant. There can be give and take. But you should not, according to Aumann, if you're both perfectly rational and you start from the same beliefs, you should not be able to agree to disagree. You shouldn't even be able to just disagree and maintain the fiction that both you and the other person are perfectly rational. It seems that we do this all the time though, right? So it's an interesting question as to why people fall short of the assumptions of Aumann's theorem. Anyway, this whole collection of ideas about common knowledge is the subject of the new book by today's podcast guest, Steven Pinker, who presumably needs no introduction. As we talk about at the very beginning of the podcast, it's part of his overall project of better understanding human behavior.
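The back-and-forth described here, each agent announcing a posterior and updating on the other's announcement until they match, can be sketched as a toy simulation. Everything in the setup below (the nine-state space, the event, the two partitions) is an illustrative assumption, not something from the episode:

```python
from fractions import Fraction

def posterior(event, info):
    """P(event | info) under a uniform prior on a finite state space."""
    return Fraction(len(event & info), len(info))

def agree(omega, event, part_a, part_b, true_state):
    """Two agents with a common uniform prior alternate publicly
    announcing their posterior for `event`. Each announcement rules
    out the states in which the speaker would have said something
    else, so it carries information. On a finite space this dialogue
    must terminate with equal posteriors, which is the intuition
    behind Aumann's agreement theorem."""
    cell = lambda part, s: next(c for c in part if s in c)
    public = set(omega)   # states nobody has publicly ruled out yet
    history = []
    parts = [("A", part_a), ("B", part_b)]
    turn = 0
    while len(history) < 2 or history[-1][1] != history[-2][1]:
        name, part = parts[turn % 2]
        p = posterior(event, cell(part, true_state) & public)
        history.append((name, p))
        # everyone keeps only the states consistent with this announcement
        public = {s for s in public
                  if posterior(event, cell(part, s) & public) == p}
        turn += 1
    return history

# Hypothetical setup: 9 equally likely states, two different partitions
history = agree(
    omega=set(range(1, 10)),
    event={3, 4},
    part_a=[{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],
    part_b=[{1, 2, 3, 4}, {5, 6, 7, 8}, {9}],
    true_state=4,
)
print(history)  # A starts at 1/3, B at 1/2, and both end up agreeing at 1
```

The two agents genuinely disagree at first (1/3 versus 1/2), but because each announcement is public, it conveys information, and within two rounds they agree exactly, without ever sharing their raw data.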
You know, humans, bless their hearts, are not perfectly rational creatures. But what exactly are the ways in which they fail to be rational? And especially, I'm becoming increasingly impressed with the importance of the social aspect of how we both really do think rationally, but also how we fall short of being rational. It's in dealing with other people that a lot of both our pros and cons come into play as thinking cognitive creatures. So this is an exploration of one aspect of that. Let's go. Steven Pinker, welcome to the Mindscape podcast. Thank you. You have a book coming out called When Everyone Knows That Everyone Knows, about common knowledge. And it's a great topic, I think. But it's a little, I guess, not what I would have expected for your next book to be. So I get it now that I've looked at the book, why you did it. But let's start by putting it into the context of a bigger project. I mean, do you think of yourself as having a big project with all of your technical work and also your books? I do. I'm interested in human nature, what makes us tick, and all of the implications of how we understand human nature. I'm trained as a cognitive psychologist. And so the subject matter is how people think. And so how people think about how people think about how people think is in some ways a natural extension. And it's also an extension that came about in particular through my work on, and my interest in, language, where one of the basic facts about language, known in linguistics for many decades, is that even after we've worked out what all the rules of grammar are and what all the meanings of words are, such that there could be an algorithm that could deduce the meaning of a sentence from the meanings of its parts and how they're arranged according to these grammatical rules, in practice, people don't mean what they say. They beat around the bush. They use euphemism. They use innuendo. If you could pass the salt, that would be awesome.
The meaning of that is not "if you could pass the salt, that would be awesome." I mean, the meaning is: give me the salt. Or, you know, nice store you got there, would be a real shame if something happened to it. Would you like to come up and see my etchings? Gee, officer, is there some way we could settle this ticket here without going to court and doing all that paperwork? We're counting on you to show leadership in our campaign for the future. You've probably heard that at fundraising dinners. So, all of these examples. I mean, one of the reasons it took so long to have AI understand language is that if you simply give it the algorithms for figuring out who did what to whom based on the rules of grammar and the meanings of words, it will miss people's intentions. So if you say to a chatbot, can you tell me how to get to Harvard Square from here? You know, literally you would say, yes, I can tell you how to get to Harvard Square from here, but that's not what the user wants. The user wants it to just give the answer. So anyway, the puzzle that I raised in a previous book, The Stuff of Thought: Language as a Window into Human Nature, which had a chapter on what I call the games people play. That is, all of the rituals that we go through to avoid saying exactly what we mean in so many words. The solution that I proposed there, and which I then built on in my own empirical research in cognitive and social psychology, and then in a chapter called Weasel Words in the new book. The idea is: what's the difference between an innuendo that everyone understands and blurting it out? And I say the difference is generating common knowledge. That is, if he says to her, do you want to come up for coffee? And she says, no. You know, she's a grown woman. She knows what "would you like to come up for coffee" means. There's no plausible deniability of the intention. And, you know, he's a grown-up. He knows what she just turned down. He knows that she knows that he knows.
I mean, he could still think, well, maybe she thinks I'm dense. And she could think, well, maybe he thinks I'm naive. And so even though there's no plausible deniability of the message, there's plausible deniability of common knowledge of the message. And with the additional claim that our social relationships are ratified by common knowledge, that is, two people are friends and each one knows that the other one knows that they're friends. Two people are lovers. Two people are in a position of authority and deference. Two people are transaction partners. All of these everyday social relationships exist because each party knows that the other one knows that they exist. We often try to avoid common knowledge in order to preserve the relationship that we have. We don't threaten it, but we want to get the message across. Anyway, this is a long-winded answer to a simple question of why did you write this book? And the short answer is that my interest in communication and language led to me stumbling on this very rich concept of common knowledge. It had been explored by logicians, philosophers, economists, game theorists. There was a lot in there. And so it was worth the book, so I wrote the book. Well, and I think it's another example just for me as the physicist doing a podcast of there's a message that comes across over and over again. I think we've all been told in various ways that human beings are less rational than we'd like to think we are. We have biases and things like that. But what I'm impressed by is how many people are telling me the ways in which human beings reason and communicate and talk are so very, very social. They're not just things we would have invented if we were on a desert island all by ourselves. And common knowledge sounds like a very relevant example of that. Well, indeed, common knowledge, what I could argue, I do argue, is the reason that we can be social in the first place. 
Namely, common knowledge is necessary for conventions, driving on the right or driving on the left, respecting a leader or department chairman or an expert, respecting paper currency, which, you know, what's the value of a green piece of paper? The value is that I know that other people treat it as having value, which they only know because they in turn know that other people treat it as having value. So all of these means of being social, conventions, but also, as I mentioned before, informal social relationships and actions that we cooperate on, where we accomplish something collectively that we couldn't accomplish individually, they depend on common knowledge, on being on the same page. And that, I suggest, is why we evolved language; language probably co-evolved with sociality. Language makes a lot of social coordination possible. Language depends on social coordination. You have to be in a cooperative relationship to exchange words in the first place. And this reminds me of, I recently did a podcast with your Harvard colleague, Cass Sunstein. He wrote a book on liberalism, and he had to spend the first five minutes explaining what he means by the word liberalism. So we're batting around this idea called common knowledge, but it's not simultaneous knowledge. It's a little bit deeper than that. Yes, although a simultaneous announcement, a revelation event, is the quickest way to generate common knowledge. And it resolves something of the paradox that if you need common knowledge to coordinate, to be on the same page, and if common knowledge literally consists of the state where I know something, you know it, I know that you know it, you know that I know that you know it, which makes your head hurt, how do people have the common knowledge that they need to coordinate? The answer is that if something is public, conspicuous, self-evident, out there, that can generate common knowledge in a stroke.
Not always, but that's the surest way to do it. In general with words, every word is a convention. You know, Shakespeare said a rose by any other name would smell just as sweet, but we can use the word rose to convey the concept of a rose, because everyone follows that convention. We can count on it. When we learn the word rose, we don't then have to poll every person we meet as to whether they understand it the same way. That's just a tacit assumption that kids have to make in order to learn to speak and that we have to make in order to use language. Sometimes, though, as in the case of what do you actually mean by liberalism, it's not foolproof, especially when it comes to abstract, esoteric concepts, concepts whose common understanding may be relative to the community in which they're communicated, or cases in which the common understanding changes. Language changes all the time, and no one decides, no one legislates the meaning of words. It's a kind of grassroots phenomenon where if people start interpreting a word or using a word differently, that is the meaning of the word. The meaning of the word is common knowledge of what it means. In this case, you didn't have common knowledge with Cass Sunstein, and so you had to stipulate it in so many words. You asked him, hey, what do you mean by liberalism? He said, my definition of liberalism is blah, blah, blah. Sometimes we have to do that. That's not the typical way in which we use words. You'll be unsurprised to learn that on social media, various people reacted to the podcast episode just on the basis of the title, without actually listening to the definition of what the words were. Well, liberalism as we know has different meanings on different sides of the Atlantic, and depending on whether it's modified. He includes Ronald Reagan as a liberal in his definition, so you have to explain why that is. It rubs some people the wrong way. But I guess, I mean, maybe simultaneous was the wrong word.
I'm just trying to highlight the definition so that we're super duper clear for the audience. It's not about everyone knowing something. It's about knowing that everyone knows something, etc. That's what the book is about, about that difference. That is, universal private knowledge is not the same as common knowledge, at least in this technical sense of common knowledge. Now, you and I right now, as with you and Cass, have to clarify what this specialized meaning of the word refers to, because when I use common knowledge, I didn't invent this usage. In fact, I don't even like this usage, but I'm stuck with it. In the technical sense in which philosophers, logicians, game theorists, and economists use the term, it refers to the case where not just everyone knows something, but everyone knows that everyone knows that everyone knows that everyone knows it. And so let's get into the psychological aspects or cognitive science aspects of this. This is, I guess, your home turf. How do we know that some knowledge that we have is common knowledge? I mean, both sort of informally and rigorously. Is it even possible to know all those levels of, I know that they know that they know that I know? Well, so I have a chapter in the book on that very topic called Reading the Mind of a Mindreader. And as I hinted at earlier in our conversation, most of the time the common knowledge is granted by a conspicuous or self-evident event, something that happens in a public place where you not only see it, but you see everyone else seeing it, or something is blurted out within earshot of everyone else. The something is obvious, conspicuous. That's the typical route to common knowledge. We can, in some circumstances, engage in the process that I call recursive mentalizing, where to mentalize means to get inside someone's head. To recursively mentalize means to get inside the head of someone who's trying to get inside your head or someone else's head.
So sometimes you think about, oh my goodness, he's probably thinking that he's probably thinking. Carry that to the limit and we've got common knowledge. So an example would be, say, a rumor that a bank might be in financial trouble. And so you think, well, gee, if I had reason to think that, probably other people do, and they probably are thinking that other people do, and they're going to withdraw their money because they're afraid that other people will withdraw their money, if only out of fear that still other people will withdraw their money. I'd better withdraw my money while there's still money to withdraw, because the bank can't cover the deposits of everyone all at the same time. And so you get a bank run. And the bank run didn't begin with a conspicuous signal that the bank is experiencing a run, that the bank is in trouble. It comes from an interplay between some bit of news that leaks out and what you then start to extrapolate about what other people might think. There are probably better, more everyday examples; bank runs don't happen very often anymore. There was one a few years ago at Silicon Valley Bank that got a lot of attention. By the way, the reason that banks don't suffer from runs anymore is the way Roosevelt solved the problem of bank runs, which are generated by common expectation, that is, people worrying about what other people are worrying about. In the midst of the Great Depression, triggered by a cascading series of bank runs, Roosevelt first had a bank holiday where no one could withdraw anything. That was kind of a nuisance, but it was really a good thing, because you didn't have to worry about other people withdrawing their money. And then came federal deposit insurance, where a bank has a big gold seal emblazoned on its window that says our deposits are insured.
The purpose of that seal is not just to reassure people that their deposits are insured, but to reassure them that other people know that they're insured, so it's less likely that the bank will fail. Before there was deposit insurance, Roosevelt's solution to the problem of bank runs, banks would often flaunt their assets with conspicuous opulence. Even in small towns, the banks were made of marble, they had gold lettering, they had spacious lobbies. This was kind of considered an insult by many working people. There's an old folk song from the Weavers, sardonically titled The Banks Are Made of Marble. But the banks weren't just showing off to flaunt their wealth and insult everyday miners and farmers. They were trying to generate the common knowledge that we have enough assets that you don't have to worry about your deposits evaporating because everyone else withdraws money before you do. Anyway, this is a bit of a digression on why in general we don't have that many bank runs anymore. But we do have hoarding, such as during COVID, when people hoarded toilet paper because they thought there'd be a shortage of toilet paper, which they then caused by hoarding the toilet paper, even though there hadn't been a shortage in the first place. That's another case of common expectation, where we really do engage in recursive mentalizing. No one ever said go out and buy toilet paper, it's in short supply. People just had to think in their mind's eye of other people grabbing toilet paper because they were worried about it. And then that snowballed into the common knowledge that there's a shortage. And I guess, maybe this is too simplistic, but I'm guessing that all sorts of financial bubbles follow a similar pattern. Maybe NFTs, non-fungible tokens, relied on the common knowledge that these would still be valuable to people someday, and that then went away. That's right, technically common expectation. Common expectation.
There isn't really anything to know, but yes, it's the same phenomenon. Indeed. In fact, that's what bubbles are, and runs and crashes and panics. John Maynard Keynes had an explanation of all these phenomena in finance and economics that can't be explained by the standard rational actor models of supply, demand, investment, and so on. So he likened speculative investing, which is what generates these bubbles and crashes, where you don't buy something based on the underlying value of the asset. Like, someone built a factory, the factory is going to produce so many widgets per year, they're going to make so much of a profit per widget, I'm going to get my share of those profits. I mean, that's kind of the way stock markets ought to work rationally, but we know that they don't. And the reason is, Keynes asked people to imagine a beauty contest. He actually claimed that it ran in the British papers at the time, which is dubious. But the object is not to pick the prettiest face. Like we used to have Miss Rheingold, probably before most of us were born, a beer ad where there were six models and you had to pick the prettiest. No, in this contest, you had to pick the face that the most other people picked as the prettiest, knowing that they were picking a face trying to outguess everyone else picking it. And he said that would often involve, he didn't use the term recursive mentalizing, but that's what he was describing. He said that would often involve the second, third, and fourth orders of anticipation of anticipation of anticipation. And mathematically, that can lead to runaway behavior when people want to be in on an appreciating stock, which makes the stock appreciate, which makes more people want to be in. This is called the greater fool strategy of investing. That is, you buy something in the hopes that you can sell it at a profit to someone else. Why would anyone else buy it for more than you paid for it?
Well, they're hoping that they can sell it to someone else at even more than they paid for it. But soon enough, the market runs out of greater fools, or whatever rumor or common-knowledge-generating salient event triggered the bubble might be contradicted by a bit of news that causes reverberant fear, fear about fear about fear, and then the bubble can pop. So a lot of the phenomena in finance that don't depend on fundamentals, the irrational exuberance, as Alan Greenspan put it, the crashes, the runs for the exits, are phenomena of common expectation. And I don't want common knowledge to get a bad reputation here from all these examples of financial ruin in our future. But there are all these, I don't know what to call them, puzzles, logic games, examples in your book and elsewhere that I've seen that try to illustrate this phenomenon of common knowledge. And I'll confess, despite being pretty good at math and logic, these are almost never illuminating to me. I think the one that came closest was a cartoon you had in the book of three logicians walking into a bar. Is that something that is explicable in real time? Yes. So that doesn't literally involve common knowledge, but it does involve recursive mentalizing, that is, thinking about what other people think. So as I recall the cartoon, the caption is three logicians walk into a bar. And the waitress comes over and says, does everyone want beer? And the first one says, I don't know. The second one says, I don't know. The third one says, yes. So that's a logic puzzle, and you can figure it out. "Everyone wants beer" is true if each one of them wants beer; it would be false if anyone didn't want beer. So if the first one says, I don't know, she must want beer, because if she didn't want beer, then she would deduce that "everyone wants beer" is false. The fact that she didn't say it's false meant that she did want beer. The second one goes through the same logic.
The third one, knowing that the first one didn't know and the second one didn't know, now knows that the first one wants beer and the second one wants beer. And since she does want beer herself, she can say yes. So she figured it out. She figured out a state of affairs from the epistemic or cognitive state of the other characters in the bar. So that's pretty easy. It's intuitive. I mean, you might have to think through it for another couple of seconds. It is similar, not isomorphic, but the way of solving it is similar to what's been called the world's hardest logic puzzle. And this one really is counterintuitive until it's explained. And then it really does make sense. And it goes by various names: the muddy children problem, the barbecue sauce problem. I describe it in terms of a bunch of academics at a conference, some of whom have spinach in their teeth, but no one knows who. And they deduce it, in this case they really do deduce it, from common knowledge. I mean, I can run through it if you're... Yeah, I'm torn. Why don't you run through it, just to illustrate to the audience either that it's harder than it looks or that they're much smarter than me. I'm happy to. Because I can get it if I sit down with a piece of paper and think about it. No, that's right. It doesn't really illuminate right away for me. Same here. So here's the problem. You've got a bunch of psychologists or academics at a conference in the dining room. Some of them have spinach in their teeth, but there aren't any mirrors around. No one wants to pick their teeth clean if they don't have spinach in their teeth. And everyone's too polite to point out that someone else has spinach in their teeth. But the department chair, who's presiding over the meeting, can't stand it any longer. And at the front, she gets up and she says: at least one of you has spinach in your teeth. Every time I clink the glass, that's an opportunity for you to clean your teeth.
Now, she clinks the glass once, no one moves. She clinks the glass twice, no one moves. She clinks the glass a third time, and the three people in the room who have spinach in their teeth all clean their teeth. They didn't move on the first clink or the second, but on the third they all did it. How did they know? Again, there are no mirrors, and no one's telling anyone else. And here's the explanation. It's a question of mathematical recursion: if you see the logic for one person and then for two, you can extrapolate and see that it applies to any number. Here's the way it works with one. With one, it's really easy. Let's say the state of affairs, the ground truth, is that one academic has spinach in his teeth. The department chair says: at least one of you has spinach in your teeth; when I clink the glass, you can clean it. So everyone looks around. The guy with spinach in his teeth looks around, and no one else has spinach in their teeth. If at least one person has spinach in his teeth, it has to be him. That's easy, kind of obvious. Now let's go to the case where two people have spinach in their teeth. Again, the department chair makes the same announcement. Everyone looks around. A person with spinach in his teeth sees someone else with spinach in her teeth. He still doesn't know about his own teeth, because all the chair said was "at least one of you." So he doesn't know whether to clean his teeth or not. Now she, seeing pretty much the same thing he does, also doesn't know whether to clean her teeth, because she doesn't know whether he's the only one. So on the first clink of the glass, they don't do anything. Now the chair clinks the glass a second time. Now each one can think: well, geez, if she were the only one, then on the first clink of the glass she would have known to clean her teeth, because she would look around and see that everyone else's teeth are clean. She knows she has to be the one. She didn't.
So therefore she must have seen someone with spinach in their teeth. I'm looking around; no one else but her has it; it must be me. She thinks the same thing. And so on the second clink, they both know that they have to clean their teeth. That's the logic. If you accept that, then you also realize that three people with spinach in their teeth will clean it on the third clink. If the room has a hundred people and 17 have spinach in their teeth, and assuming they're logicians, then they'll all clean their teeth on the 17th clink. But that crucially depends on common knowledge. It wouldn't work if the department chair went over and whispered the announcement in everyone's ear. If you didn't know that everyone else knew, then the fact that the woman with spinach in her teeth didn't clean her teeth would not convey the information that she saw someone else with spinach in their teeth. So it crucially depends on common knowledge. So that's the world's hardest logic puzzle. It's pretty hard. Maybe not the world's hardest, but having been in faculty meetings, it's unrealistic to think that all the rest of the faculty would be perfect logicians in quite that way. But it is realistic to think that the department chair would be annoying enough not to just say three people have spinach in their teeth, so you could simply count, which would have made it a lot simpler. So it works better if the academics are logicians. Yes, very much, and they've probably heard puzzles like this before. I mean, there's a similar counterintuitive result that is maybe a little bit more profound, but is really at the heart of this whole game: Aumann's agreement theorem, which is one of these things that's a very trivial kind of thing to prove, with a conclusion that makes you think it can't possibly be right. So why don't you explain that to us? Yes. So this is a theorem that rational people cannot agree to disagree. Well, that's not exactly right.
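The induction can be checked mechanically with a small epistemic-logic simulation. This sketch is my own, not from the book: possible worlds are the sets of who might have spinach, the chair's announcement removes the world in which nobody does, and every silent clink publicly eliminates the worlds in which somebody would already have known and cleaned.

```python
from itertools import combinations

def spinach_clinks(dirty, n):
    """Return (clink number, who cleans) for the spinach puzzle with n
    academics, of whom the set `dirty` actually have spinach."""
    # Worlds compatible with the chair's "at least one of you" announcement.
    worlds = [frozenset(c) for r in range(1, n + 1)
              for c in combinations(range(n), r)]

    def knows_dirty(agent, true_world, candidates):
        # The agent sees everyone else's teeth, so they keep only candidate
        # worlds that agree with the true world outside themselves.
        consistent = [w for w in candidates
                      if w - {agent} == true_world - {agent}]
        return all(agent in w for w in consistent)

    clink = 0
    while True:
        clink += 1
        movers = {a for a in range(n) if knows_dirty(a, dirty, worlds)}
        if movers:
            return clink, movers
        # Nobody moved: publicly rule out every world in which somebody
        # would have known they were dirty and cleaned their teeth.
        worlds = [w for w in worlds
                  if not any(knows_dirty(a, w, worlds) for a in range(n))]

print(spinach_clinks(frozenset({0, 1, 2}), 5))  # (3, {0, 1, 2})
```

With three spinach-toothed academics among five, everyone with spinach cleans exactly on the third clink, matching the induction; whispering the announcement would correspond to each agent keeping a private set of worlds, and the elimination step would never go through.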
Anyway, so it was proved by an Israeli mathematician, Robert Aumann. He won a Nobel Prize, though not for this; I guess that happens with Nobel Prize winners, even the ideas they don't win Nobel Prizes for are ingenious. This is very simple, the whole paper is three pages, and as he says, the idea is simple but it's not intuitive. Let me state it. Here's the theorem. Take two rational agents with the same priors, in the Bayesian sense of their credence in a hypothesis before they've even looked at the evidence, that is, based on their entire understanding of the world, everything they've discovered so far, who then make their posteriors common knowledge. That is, after looking at the evidence, and each one can look at different evidence, they don't have to see each other's evidence, each one just announces: my estimate is 0.7 that this hypothesis is true. And the other one announces her posterior. Those posteriors must be the same. That is, they cannot agree to disagree. Now, what's surprising about it? There's a less surprising version, which is: if they're completely rational and each of them shares the evidence that motivated their posterior, their conclusion, then you might say, well, if he's rational, I've got to trust him, and there's no reason I shouldn't take his evidence seriously just because it's his evidence and not my evidence. Evidence is evidence; it's not about me, it's not about him. So that would be a little more intuitive. If you swallow the assumptions that they share the same priors and that they're both completely rational, the surprise is that they don't have to actually share their evidence. They just have to share their posteriors, that is, their assessments of what the evidence means, and those posteriors must be the same.
Now, one way to think about it, and this isn't actually how the theorem goes, but it was worked out by later logicians: you can imagine one of them announcing her posterior, that is, I assign 0.7 to the hypothesis being true. The other one announces his posterior: I actually only have 0.4 confidence that it's true. Then the first one will say, oh, well, if you say it's 0.4 and I say it's 0.7, I'm going to update my posterior; here's my new posterior. Then he updates his, and they end up in the same place. The idea is that they have to end up in the same place if they're both rational, even if neither one gives the basis for the conclusion. The surprise is that if they do it that way, they don't gradually converge and meet somewhere in the middle, which is kind of how we expect arguments to go. Their positions are random walks that end up in the same place, but could go every which way: they could leapfrog each other, they could outdo each other, they could go from a moderate position to an extreme position, until the final step, in which they end up at the same conclusion. It's a little bit like the spinach-in-the-teeth problem, in that it's only on the third clink that suddenly everyone comes to the same realization. This sounds kind of absurd. Isn't it good to agree to disagree, and when people have a rational argument, don't you meet somewhere in the middle? But it forces us... like all mathematical theorems, it's only as valid as its premises are true, and sharing priors itself raises a whole bunch of questions. But the reason I discuss this, the reason I discuss the spinach-in-the-teeth problem, even though they are sort of esoteric mathematical problems, is that I think they do have implications. So in the case of argument, when you think about it, why should two people meet in the middle? Who says that the truth has to lie halfway between the opinions of two guys?
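That back-and-forth of announced posteriors can be made concrete. The sketch below is a hypothetical construction of mine in the spirit of later formalizations of the dialogue (Geanakoplos and Polemarchakis), not Aumann's own proof: two agents share a uniform prior over nine states but carve the states up differently; each public announcement rules out states, and the posteriors bounce rather than meeting in the middle.

```python
from fractions import Fraction

def dialogue(n_states, partitions, event, true_state, max_rounds=10):
    """Two agents with a common uniform prior over states 1..n_states
    alternately announce posteriors for `event`; each announcement
    publicly rules out states until the posteriors coincide."""
    public = set(range(1, n_states + 1))  # states not yet ruled out
    history = []

    def posterior(agent, state, pub):
        # States the agent considers possible: their own partition cell,
        # intersected with whatever the announcements have not ruled out.
        cell = next(c for c in partitions[agent] if state in c)
        info = cell & pub
        return Fraction(len(info & event), len(info))

    for r in range(max_rounds):
        agent = r % 2
        q = posterior(agent, true_state, public)
        history.append(q)
        # Listeners learn the announcer's information is consistent with q.
        public = {s for s in public if posterior(agent, s, public) == q}
        if len(history) >= 2 and history[-1] == history[-2]:
            break  # common knowledge of the posterior: agreement
    return history

# Agent 0 distinguishes thirds; agent 1 distinguishes {1..4}, {5..8}, {9}.
parts = [[{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],
         [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]]
print(dialogue(9, parts, event={3, 4}, true_state=1))
# Posteriors lurch 1/3 -> 1/2 -> 1/3 before agreement settles at 1/3.
```

Neither agent ever reveals which states they have observed, only a number, yet the announcements alone force agreement; and the path is a lurch, not a monotone compromise.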
I mean, what guarantees that they'd be straddling opposite sides of the truth? Likewise, why should you privilege your own assessment over anyone else's, on the charitable assumption that they're as rational as you are? Now, of course, I think I'm more rational than everyone else, but I would, wouldn't I? Everyone else thinks that too. Everyone else thinks that too. So there really is no reason to privilege your own assessment, on the assumption that other people, like you, are rational. And a final implication, and this is a little bit fanciful: the linguist George Lakoff and the philosopher Mark Johnson, in a famous little book they published 45 years ago called Metaphors We Live By, noted that language contains lots of metaphors that we don't even realize are metaphors, which allow us to talk about abstract concepts in concrete terms. One of the metaphors they discuss is that argument is war: he demolished his position; he tried to defend it, but I found the weak spot. We use the language of war in talking about arguments. And as a kind of whimsical thought experiment, Lakoff and Johnson say: well, do we have to think about argument as war? Why don't we think of it as a dance? And as it happens, the sequence of reaching agreement in Aumann's construction is in some ways more like a dance than like a battle. That is, it's a random walk, so you can lurch and weave and bob all over the place before arriving at agreement. So this esoteric mathematical theorem might actually have some insight. And then, just to tie it to implications we ought to draw: you and I both know that a lot of arguments among academics, among politicians, are kind of pissing contests. It's about who's going to win. Often people use dirty debating tricks. They set up a straw man. They look for a loophole, something the person just neglected to mention.
It's not the best way of arriving at the truth to make it a combat sport, because the truth is the truth; it doesn't care about your ego. If all you're doing is trying to win, that's not the same as trying to get to the truth. And so Aumann's theorem in some ways is an exercise in humility, in epistemic humility, and might press back against the bad habit we have of seeing an argument as something that we want to win. Well, yeah, it's an ideal theory kind of thing, right? It's not a prescriptive... sorry, it's not a descriptive kind of thing. It's more like this is what we should aim for. Exactly. And that's exactly the way I present it. And just to make the assumptions super clear, because like you say, the theorem is only as good as its assumptions: the conclusion is that two rational people can't agree to disagree if they have the same priors. Right. So if they have the same priors, if they're both perfectly logical, and, I think, if they both agree that each other is perfectly logical. That's common knowledge, right? That is totally right. Not only do the posteriors have to be common knowledge, but each other's rationality has to be common knowledge. You are right. Which, by the way, also applies to the spinach-in-the-teeth problem. Right. That is, rationality has to be common knowledge. And in general in game theory, almost everything depends on a background assumption that the parties are rational and that their rationality is common knowledge. I mean, that's how you psych out the other person: you assume that they're rational. And I guess, without any data, and maybe you have some data to share with us, I'm guessing that the fact that people do disagree is sort of half because they have different priors and half because they're just not convinced the other person's being rational. Yes, I think that's a large part of it.
And I discuss a paper by Tyler Cowen and Robin Hanson where they look at the contrast between ideal argumentation and real argumentation and ask: why don't we behave like the rational agents in Aumann's theorem? And they suggest something that's kind of a commonplace to any social psychologist, which is that people are kind of dishonest, in the sense that they don't approve of other people bending the evidence in their favor or setting up straw men, but they do it themselves. And is there a feeling that, given this theorem, et cetera, maybe this can inspire us to take the opinions of others more seriously? I'm thinking of a common move you get in the media or social media, where people will say: look, I said this and everyone disagrees with me, so I must be on to something. Which is sort of the opposite of what Aumann would have us believe. Well, it's the opposite of a lot of Bayesian thinking in general. And actually, this is something that I worked out in my book Rationality: I think that the bias in science journalism, but probably in science itself, to favor the paradigm-threatening discovery, going against conventional wisdom, overturning the consensus, the rebel, the upstart, is probably responsible for a lot of error. Science magazines love this stuff; the clickbait is: Was Darwin wrong? Was Einstein wrong? And the reason that it's a recipe for error is that it's very un-Bayesian. It's throwing out the priors; it's treating the latest little tidbit of evidence as if it were reason to change your entire understanding. Whereas if there was some reason for a consensus, for the textbook view, sometimes denigrated as the dogma, well, probably a lot went into that. That's your prior.
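To put rough numbers on that, here is a toy Bayes calculation of my own (the likelihood ratio is invented for illustration): a consensus backed by a 0.95 prior barely budges on one contrary study, while a coin-flip prior swings wildly on the same evidence.

```python
def update(prior, lr):
    """Posterior probability after evidence with likelihood ratio
    lr = P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    odds = prior / (1 - prior) * lr  # Bayes' rule in odds form
    return odds / (1 + odds)

# One surprising study, assumed four times likelier if the consensus is wrong.
print(round(update(0.95, 1 / 4), 3))  # 0.826: consensus still heavily favored
print(round(update(0.50, 1 / 4), 3))  # 0.2: a flat prior lurches on one result
```

The un-Bayesian move is to read the single study as if the prior were flat; the strong prior is exactly what a well-established textbook consensus represents.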
Maybe if there's a contradictory bit of evidence, you should update and decrement your confidence a little bit. But you shouldn't throw everything out the window and just assume that the result of the experiment announced this morning is the truth. I think that's one of the reasons we've had a replicability crisis: the journals themselves, but also the science journalism, give undue weight to the particular discovery and downplay the prior. There's one physicist, I'd never heard of him, maybe his name is Ziman, who said, and you might disagree with this, it may be a bit of an exaggeration: 90 percent of what's in the journals is false; 90 percent of what's in the textbooks is true. I believe that. I get the spirit of it, anyway. Yes. And there's a famous epidemiologist, John Ioannidis, who published a kind of scandalous paper about 20 years ago, "Why Most Published Research Findings Are False." And often the reason is that if you just confirm the consensus, you don't get a publication out of it. Exactly. But I'll be honest, I've often struggled with this question of what to do with consensuses, because on the one hand, you make progress by showing the consensus isn't right, right? On the other hand, the consensus is usually right. So you don't want to default 100 percent to either side. No, no, that's right. Aumann's theorem notwithstanding, the real-world implementation of it is tricky. No, indeed. I think what we can say is that there is a widespread tendency in science journals, and also in science journalism and among scientists, to over-update in response to a single data point. I think that's probably a bad habit. Going back to Aumann, what does it mean? You've heard of the so-called rationality movement, which, you'd think, how could that be a movement?
Aren't we all supposed to be rational all the time? Is there an irrationality movement on the other side? Yes. So the rationality movement, with its sort of unofficial headquarters in Berkeley, is the attempt to call attention to the fallacies and biases that cognitive psychologists and behavioral economists have documented, and to try to overcome them, often with Bayesian reasoning. Sometimes the rationality community are called Bayesians. They have canons of argumentative hygiene, or best practices, which include things like stating your degree of credence, your posterior, as a quantity between 0 and 1: instead of saying this is right or that's wrong, saying I have about 0.7 confidence in this. Steelmanning rather than strawmanning your opponent: don't set up a straw man that's easy to knock down, but argue against the strongest version you can imagine. Engaging in adversarial collaborations, where you get together with your worst enemy, decide a priori what would settle the issue to both of your satisfactions, and then go out and gather the data. All these practices are in the spirit of Aumann, so much so that the conference center in Berkeley that was set up as a kind of home for rationality conferences named its main meeting room Aumann Hall. So if we all go in, no one comes out disagreeing. That's great. I'd like to do that experiment. Okay, good. I'm glad the audience was kind enough to go with us on this little journey of logic and formalization and Bayesian reasoning. But one of the fun parts of your book is demonstrating the implications of common knowledge in our everyday life, right, as human beings. And so you draw an interesting distinction between cooperation, which does rely on common knowledge, and coordination, which also does, but in a sort of more central way, maybe.
Yeah, so, admittedly, these are somewhat specialized usages, but cooperation, as it's been discussed in evolutionary biology and in economics, experimental economics, usually refers to the case where one person, or one animal for that matter, confers a benefit on another at a cost to itself. And it's a weighty scientific problem, because one could ask how cooperation, and in particular how altruism, probably a better term, could ever have evolved, given that, all things being equal, you'd expect natural selection to favor selfishness. And since Richard Dawkins spoke of the selfish gene, there's been a lot of discussion of how cooperation can evolve through reciprocity, through reputation, and so on. But what I came to realize is that a lot of cases of organisms conferring benefits on one another are not altruistic, in the sense that one of them incurs a cost, raising this puzzle, but are mutualistic. That is, everyone wins. So in the case of a bird that picks ticks off the back of an ox, the bird doesn't have to be repaid; the ox doesn't have to pick ticks off the back of the bird, even if it could. The bird gets a meal, the ox gets fewer pests, and everyone wins, except the ticks; they don't get a vote. These are cases that biologists call mutualism. And a lot of human working together is not altruistic but mutualistic: both parties win. A potluck dinner: you bring complementary courses. You're not doing someone a favor by not bringing dessert while they bring dessert; both of you end up better off that way. Two people meet for a coffee date: it's important that they both pick the same place, but neither one is doing the other a favor by going to that place. They both want to end up in the same place. Right.
The reason that mutualistic coordination is also a scientifically interesting problem is not the danger of being exploited, as in altruistic cooperation, where someone keeps getting all the goodies but never repays me when their turn comes. The problem in coordination is one of knowledge: namely, how do you end up on the same page, given that it isn't enough to know what the other guy's going to do? You have to know that the other guy knows what you're going to do, and so on and so on. So the logical problem of coordination requires common knowledge as its solution, in turn raising the question, for a psychologist, of how people attain common knowledge. You know, this is probably petty of me, not even petty, because I have no dog in the fight, as it were, but as soon as you mentioned Richard Dawkins and the selfish gene, I thought of the debates a few years ago about the origin of and explanations for altruism, the showdown over kin selection versus group selection. I'm sure you were aware of that. Oh, yes. They were not perfect examples of Aumannian rationality at work, I thought. Perhaps not. It gets pretty vitriolic among those people. If anyone wants to dip into it, you can Google a paper that I wrote a number of years ago called The False Allure of Group Selection. It was published on Edge and has commentaries by some defenders of group selection. I think the whole notion is rather confused. Perhaps I was not being as rationalistic as the rationality community would recommend. Maybe I did not steelman the defenders of group selection. I think I did, but the thing about all these things is, I'm not the one to judge. Of course, I would think that I am. Exactly.
But also, to be fair to us as academics, we do have this practice of writing something, publishing it, and then inviting responses, including from people who disagree with us, which might be a model for elsewhere in the world. Indeed. And that is why academic freedom, another one of my hobbyhorses (I co-founded the Council on Academic Freedom at Harvard), is, I would argue, indispensable. Not because academics are special and should be allowed to do whatever they want; we're not privileged compared to anyone else. But without academic freedom, you can't converge on the truth, because of the process you just described: you publish something, you might be right, you might be wrong, you don't know. We'll only know when other people get to attack it, try to falsify it. If you disable that process by canceling or punishing someone for what they believe, you're never going to find out what's true or false. Okay, good. So let's get into more of this cooperation and coordination question. I mean, there does seem to be, or maybe I'm presuming where it's not there, a chicken-and-egg problem. Like, how do we all come to have the common knowledge to drive on the correct side of the road, et cetera? Is this a capacity that human beings have? Is it different among different species? Yeah, other species do have coordination problems that they have to solve, not by common knowledge, because most other species aren't very bright, but through a similar mechanism: namely, a conspicuous public event. I mentioned that's the typical way in which we humans generate common knowledge, which we use to coordinate in all kinds of ways that evolution never prepared us for, like money, or organizations and institutions. But consider even an organism as simple as coral, which doesn't even have a brain to have thoughts, let alone thoughts about thoughts.
They face a coordination problem because they're stuck to the ocean floor. They've got to reproduce, but they can't go out on dates; they can't even have intercourse, because each one of them is sessile, stuck to the floor. What do they do? Well, they spew gametes, eggs and sperm, into the ocean, kind of in the hope that they'll meet up with their counterparts from some other coral. But they can't spew out eggs and sperm 24/7; it's metabolically expensive. So it's in all of their interests to somehow "agree," in scare quotes, on what day to do it. Now, they can't talk, they can't think. They have to tacitly agree, or behave as if they agree. And the way they do it is to use the full moon as the common knowledge generator. A fixed number of days after the full moon, and the number differs for different species, they engage in what marine biologists call the Great Barrier Reef annual sex festival: namely, five days after the full moon, they all spew, and so the eggs and the sperm find each other. It's not that they literally have common knowledge, but they solve a coordination problem by a public conspicuous event. So that absolutely makes me think of an analogy I just thought of that would be completely useless to everyone but myself, but I feel the need to give it anyway, which is the horizon problem in inflationary cosmology. When we look in different directions of the sky and see the relic microwave background radiation from the Big Bang, they're the same temperature, even though those regions were never in causal contact with each other in the traditional cosmology. How did they know to be at the same temperature, even though the temperature changes with time? And the inflationary-universe scenario is a big common event that sort of tells them to set their clocks in the same way, and therefore they can be more or less the same temperature.
Interestingly, they're now too far away to exchange information even at the speed of light. But rewind the clock and there's a point at which they were cheek by jowl. When you introduce the phase of inflationary expansion at early times, they were in causal contact with each other, and this tiny little patch of space expands to put them so far apart that it looks like they were never talking to each other. But in fact, there was a secret communication. Interesting. At least a secret interaction. Yeah, not going to help when you're out there on the street trying to explain these things, but that's okay. So, it gets very interesting in the book, which I do recommend to people, that this coordination problem sort of goes both ways. You already hinted at this earlier on, but sometimes we are abetted by taking advantage of common knowledge, and everyone drives on the right side of the road. Other times we don't want there to be common knowledge, so we speak in intentionally elliptical ways so we can have some plausible deniability. Exactly. Exactly. And that's in a chapter of the book called Weasel Words, where I discuss why we so often speak in euphemism and innuendo and hints, and also why, in the nonverbal equivalent, we avoid eye contact. I have another chapter called Laughing, Crying, Blushing, Staring, Glaring, on nonverbal displays that I argue are common knowledge generators. Eye contact: you're looking at the part of the person that's looking at the part of you that's looking at the part of them, et cetera. Blushing: you feel the heat inside your cheeks at the same time as you know other people can see the reddening on the surface of your cheeks, and they know that you know that you're blushing. Laughter: your breathing is interrupted at the same time as other people can hear the staccato sounds of laughter. So all of these are common knowledge generators that sometimes we try to avoid.
We stifle a laugh, we choke back a tear, we avoid eye contact. Sayings like "Can you look me in the eye and say that?" capture it: one person is trying to avoid common knowledge, and the other is trying to generate it. I truly don't know the answer to this: are the rules and implications of things like eye contact and blushing universal across cultures? Probably, like a lot of universals, they're statistical universals, not 100 percent, depending on the context, with some exceptions and some parametric variation. But eye contact, for example, as a potent signal, often a signal of threat, I suspect is universal. It certainly operates in other primates. But it's also a signal of, like, romantic interest. Yeah, so we humans take what we evolved with and we repurpose it. So eye contact, which in other primates is generally a threat signal: the dominant stares at the subordinate, who looks away; if their eyes lock, there's going to be a fight. And that's also true of humans, as in the barroom taunt "You looking at me?," or the ultimate fight-club stare-down, with the two of them looking into each other's eyes to see who flinches. But in humans, as in "Can you look me in the eye and say that?," eye contact is more general. It's a signal that what has so far been private knowledge between us is henceforth common knowledge. And one of the most common examples is flirtation. In flirtation, as with the dominant staring at the subordinate who looks away, the flirter looks at the flirtee, who then kind of looks away, keeping it at the level of flirtation. If their eyes lock, that often means something serious is going to happen. My late colleague Irv DeVore, a biological anthropologist at Harvard, used to tell his class: if two people anywhere on earth look into each other's eyes for more than six seconds, either they're going to have sex or one of them is going to kill the other.
Is this something that we could raise to the level of predictive theory? Can we think this way and make predictions for psychology experiments we haven't done yet? Oh yes, and I do that in the book. I've published a fair amount of experimental work testing predictions from ideas about common knowledge. Can you give us some examples? Yeah, let's see. We did a study on self-conscious emotions, that is, embarrassment, shame, guilt. And the hypothesis was that what makes someone feel self-conscious is not so much that some faux pas or infraction was detected, but rather that you acknowledge that it was detected. It's the common knowledge that drives the acute embarrassment. You can get away with something if you don't... I'll give a rude example: let's say you pass gas, and it's audible enough that you suspect others have noticed it. Everyone knows. However, if you were then to meet someone's gaze, that would be way worse. You could look away, pretend they didn't hear it, pretend that they don't know that you noticed them hearing it; what is truly mortifying is the common knowledge. And so we had people imagine themselves in various compromising circumstances and varied the levels of knowledge: does the onlooker know, do you know the onlooker knows, does the onlooker know that you know that they know, and so on. We found that indeed what was most mortifying was common knowledge. Then, because we didn't want to just do it hypothetically, with people fantasy-playing, we actually put them in a circumstance in which they could be embarrassed: namely, they had to give a karaoke performance. Basically, they had to sing Adele's Rolling in the Deep, which has a soaring chorus, and they were told that their vocal stylings were being judged by a panel of fellow students. And they could see their fellow students in a video feed. In reality, it was a recording, but we told them it was live.
And either they thought that the panel of judges knew that they were seeing the panel of judges, or that the panel of judges did not know that they, the singer, knew that they were being observed. And then we asked them: okay, now you've sung the soaring chorus, how embarrassed did you feel? And they felt way more embarrassed if they thought that the judges knew that they knew they were being judged. And there are everyday examples where, of two people, one might suddenly realize that the other kind of insulted them or worked against them. Each one of them can suspect the other, but as long as they keep it to themselves and neither one says it, they can maintain their friendship, and neither need feel embarrassed. Is it an example of common knowledge that something embarrassing just happened, but we're not going to acknowledge it? Well, if we don't acknowledge it, I suggest, that's what keeps it out of common knowledge. That is, there could be private knowledge; there could even be reciprocal knowledge: I know that he knows, he knows that I know. But there may not be common knowledge; that is, I may not know that he knows that I know, et cetera. My argument is that it's the common knowledge that drives our relationships and most strongly drives our self-conscious emotions: our awkwardness, our shame, our mortification, our embarrassment, our outrage. I guess I'm just asking: could there be relevant examples where there is common knowledge, where we all know something just happened, we all know it's embarrassing, but we socially agree not to acknowledge it? Oh yes. That is the elephant in the room. That's the pretending to look away. Think of it as common pretense. There is some common knowledge there: the common knowledge is that we pretend that something opposite to reality is the case. I have a discussion of cases where, say, someone has a speech impediment, or someone is obese.
Everyone knows it, but you try to avoid talking about it. It's a little bit odd, and one could even argue that it is dysfunctional. I reproduce an interview with the writer Lindy West, who, quote, came out as fat. Now, the analogy with coming out of the closet, coming out as gay, is a little bit weird. Because, as the interviewer said, no one's going to say, oh my God, dude, I can't believe you're fat. They knew that. But she said, in effect: I appreciate people's considerateness, but the burden of pretending that I'm not fat kind of distorts things. And I think we'd be better off if we could just say, look, I'm fat and you know that I'm fat, so let's just get it on the table. By the way, "get it on the table": another metaphor for common knowledge. Now, not everyone would go along with her. And I certainly would not be the first to point out that one of my companions has a high body mass index, even if it's obvious to everyone. But it shows the tension between what everyone knows and what everyone knows that everyone knows, and this common pretense, this elephant-in-the-room metaphor of commonly pretending that something isn't true. Yeah, I don't know whether to feel weighed down or energized by the knowledge of all these conventions that we have adopted to get through the day. And how they can fail. I was once told, and I have no idea whether this is true, that one of the reasons why France and Germany kept having wars with each other is that the French get insulted when you don't fill them in on everything you know, and the Germans get insulted if you assume they don't know something and you try to tell them. So whenever they had peace negotiations, they would end up in recriminations. Well, interesting. Whether or not that's literally true...
What is true is that an awful lot of wars are fought over saving face, losing face, honor, humiliation, including the war in Ukraine right now. What is it about? It's really about Russia's desire to undo their humiliation at the hands of the West. There's the scene in Duck Soup, the Marx Brothers movie, in which Freedonia goes to war against Sylvania because Groucho Marx's character imagines what it would be like if the other country's ambassador refused to shake his hand. And so there are these levels of trying to anticipate what the other person is thinking. This is something very familiar to poker players, right? Because you have to reason, I think this person thinks that I think this, and so on. And I presume that in poker, since it's a finite game, there are only so many things that can happen; there should be some equilibrium that you eventually hit. But are there psychology studies on how good human beings are at going to the level of thinking that the other person thinks something, and then that they think that I think something? I don't know how many studies there are, but what I do know is that two psychologists, one of whom was a former student of mine, have made careers in poker, each one becoming a celebrity in the process. Maria Konnikova, who was my undergraduate at Harvard. A former Mindscape guest, yes. OK. And Annie Duke, whom I knew as a student; she wasn't my student, but she originated in the same field as me, child language acquisition, before making the leap to being a card sharp. Both of them are gifted cognitive psychologists who presumably put their tacit knowledge to work. Now, poker is very interesting. We have the expression "poker face," because any tell can be used against you. It's obviously a quintessential game-theoretic situation.
In fact, John von Neumann invented game theory to deal rationally with poker, because it was a game of imperfect information, a game of strategy. It was not like chess, which is perfectly determinate. Poker involves bluffing and calling and so on. And it's a case in which a poker face, and in some cases an ability to be perfectly random, is an advantage. Because as soon as you deviate from being random, that is something your opponent can use against you. So it's an out-guessing standoff. As in, say, hockey: the shooter can shoot left or right, the goalie can defend left or right. If either of them has a preference, then the other one can use it to their advantage. The optimal shooter and the optimal goalie have a mental random number generator. Which is very hard for human beings to do. Which is very hard for humans to do, yes. Yeah, that reminds me. Maybe this is a good place to start winding up: abuses of the idea of common knowledge. I'm not sure whether this is a relevant example, because I made it up rather than getting it from your book. The example I thought of was the OK sign, when people hold up their fingers to show OK, and how this has been co-opted by white power groups to show that they're in that in-group. But then when people say, oh, you're making the white power sign, they say, what do you mean? I'm just making the OK sign. Well, in a notorious case, an innocent truck driver got fired. Oh, I didn't know that. Someone caught him on a cell phone video making the OK sign. This poor schlemiel didn't have a racist bone in his body. He was Hispanic, and he lost his job. This is what led Yascha Mounk to write an article, during peak wokeness, saying stop firing innocent people. But yes, going back to common knowledge: common knowledge is relative to a community of knowers. You have common knowledge within some network.
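An aside on the shooter/goalie standoff above: for a 2x2 zero-sum game with no saddle point, each player's equilibrium mix is found by making the opponent indifferent between their two options. A minimal sketch, with made-up scoring probabilities (nothing from the conversation):

```python
# Mixed-strategy equilibrium of a 2x2 zero-sum "out-guessing" game,
# like the shooter/goalie standoff. Payoffs are the shooter's
# scoring probabilities; the numbers below are purely illustrative.

def mixed_equilibrium(a, b, c, d):
    """Shooter's payoff matrix:
                     goalie left   goalie right
        shoot left        a             b
        shoot right       c             d
    Returns (p, q, v): the shooter's probability of shooting left,
    the goalie's probability of diving left, and the value of the
    game. Assumes no saddle point, so the interior solution holds."""
    denom = a - b - c + d
    p = (d - c) / denom          # makes the goalie indifferent
    q = (d - b) / denom          # makes the shooter indifferent
    v = (a * d - b * c) / denom  # equilibrium scoring probability
    return p, q, v

p, q, v = mixed_equilibrium(0.3, 0.9, 0.8, 0.4)
assert abs(p - 0.4) < 1e-9   # shooter: shoot left 40% of the time
assert abs(q - 0.5) < 1e-9   # goalie: dive left 50% of the time
assert abs(v - 0.6) < 1e-9   # scoring chance at equilibrium
```

Any deviation from these frequencies hands the opponent an exploitable pattern, which is exactly why the "mental random number generator" matters.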
And if you're not part of that network, what's common knowledge to all of them may not be common knowledge to you. The common knowledge may not include you. And is it an exaggeration to think that a failure of common knowledge gets in the way of stopping dictatorships? Like if you have a populace, or even an establishment, that all wants to stop somebody from doing something, not going to mention any names, but just hypothetically, and none of them wants to be the first mover. There's a coordination problem there, because a single person resisting will be stomped down, even if everyone resisting at once would succeed. Totally. Big time. And this was a point made by Michael Chwe in a book called Rational Ritual, a predecessor to mine from 25 years ago or so. He noted that public demonstrations can generate common knowledge when everyone in a public square can see everyone else, and that can give them the safety in numbers to coordinate resistance, whether by storming the palace or by engaging in work stoppages. I quoted, or he could have quoted, a line I called to mind from the character Gandhi in the eponymous movie, where he tells a British colonial officer: in the end you will leave, because there is simply no way that 100,000 Englishmen can control 350 million Indians if the Indians refuse to cooperate. That captures it, but he could have said coordinate. That is, 100,000 Englishmen can control 350 million Indians if they can control them one at a time. They just can't control them all at once. And so it can be a demonstration in a public square. It can also be a newspaper or magazine article. That's why autocrats don't allow freedom of the press, why they have censorship and repression. The Arab Spring was kindled by social media, by Facebook and Google, until dictators cottoned on to that danger and started to control the internet.
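The first-mover problem described here is often modeled as a threshold cascade, in the spirit of Granovetter's classic model (my illustration; the numbers and function are not from the book or the conversation): each citizen joins only once the visible crowd reaches their personal threshold, so one conspicuous public signal can tip the whole population.

```python
# Threshold-cascade sketch of protest coordination (in the spirit
# of Granovetter's model; illustrative numbers only).

def protest_size(thresholds, publicly_committed):
    """thresholds[i] = crowd size citizen i must see before joining;
    publicly_committed = protesters everyone can already see (say,
    because of a demonstration in the main square). Iterates until
    no one else joins."""
    crowd = publicly_committed
    while True:
        new_crowd = publicly_committed + sum(1 for t in thresholds if t <= crowd)
        if new_crowd == crowd:
            return crowd
        crowd = new_crowd

# 100 citizens, each willing to resist once they see 10 others.
thresholds = [10] * 100

assert protest_size(thresholds, 9) == 9     # below the tipping point
assert protest_size(thresholds, 10) == 110  # one more visible protester tips everyone
```

The jump from 9 to 10 visible protesters flips the outcome from total inaction to total participation: it is the conspicuous public event, not privately shared discontent, that does the work.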
I mean, famously, a few dozen Spaniards did conquer millions of indigenous Americans back in the day, because the indigenous people were not able to coordinate in any way. Well, yes, and they were helped along by, as Jared Diamond put it, guns, germs and steel. The germs helped, absolutely, yeah. The germs helped, too, yeah. So I love the point about the demonstrations. That's an interesting point. I mean, I think of demonstrations as largely making the demonstrators feel good. Like, I'm in favor of them; if you feel strongly about something, go ahead and demonstrate. But just the symbolic act of letting other people know there is so much resistance out there can have helpful coordinating effects. Yeah, and that's why it'd be different from, say, a public opinion poll that showed that a majority of people were disgruntled with the regime. Or at least if the opinion poll itself doesn't become common knowledge. So if I were a rebel and I had the results of a confidential opinion poll, it wouldn't do me that much good. Oh, great, everyone agrees with me. But I'm still going to get imprisoned if I protest. But if everyone does it at the same time, which they will do only if they know that the others will do it at the same time, and common knowledge is necessary for that to happen. Which is why many of the quiet revolutions of the last 30 years, the Velvet Revolution, the color revolutions in some former Soviet republics, often were triggered by some kind of coordinating signal. Everyone's cell phones went off at the same time. People tied tin cans to the tails of stray cats. Signals like these were considered highly subversive and stamped out by the authorities simply because of their common-knowledge-generating power. So I cited a joke from the old Soviet Union where a man is handing out leaflets in Red Square, and of course the KGB arrest him, bring him back to KGB headquarters, only to discover that he's been handing out blank sheets of paper.
They demand, what is the meaning of this? He says, what's there to say? It's so obvious. Everybody knows. And this is a joke about common knowledge. Well, here's the crucial thing. Yes, everybody knows, but when he handed out the sheets and people took them, now everyone knows that everyone knows. And that's what the authorities could not tolerate. And indeed, in Putin's Russia, people have been arrested for carrying blank signs. I don't want to say too much about it, because I've done a podcast interview, which I recorded before this one but which will air after this one. But one of the interesting results mentioned in it was about people who believe conspiracy theories: they tend to wildly overestimate how many other people believe those conspiracy theories. Like if it's something that 5% of the world believes, they think 60% of the world believes it. And so you've convinced me that maybe an increase in our overall ability to have not just knowledge, but common knowledge, might make the world a more rational place. Well, there's a name for that phenomenon. It's called pluralistic ignorance, or a spiral of silence: everyone believes that everyone else believes it, but no one actually believes it. It's a case of a common misconception paired with private knowledge to the contrary. All right, well, we're going to try to clean things up. Steven Pinker, thanks very much for being on the Mindscape podcast. Thanks for having me, Sean. Great conversation.