332 - Concordance Over Truth Bias (rebroadcast)
69 min
Feb 2, 2026

Summary
This episode explores the "concordance over truth bias," a newly identified cognitive distortion in which people prioritize a claim's alignment with their political beliefs over its factual accuracy. Researchers found that partisan bias is twice as influential as headline truth in determining what people believe and share, even when people are presented with obviously false information that aligns with their views versus true information that contradicts them.
Insights
- Political concordance (alignment with existing beliefs) is approximately twice as influential as factual truth in determining belief and sharing behavior across all education and cognitive ability levels
- The objectivity illusion—believing oneself to be objective while being highly partisan—is the strongest predictor of susceptibility to concordance over truth bias
- People are more likely to reject true headlines that contradict their political views than to accept obviously false headlines that support their views
- Education, analytical reasoning, and cognitive reflection skills do not buffer against concordance over truth bias, suggesting the problem is not simply lazy thinking
- Current solutions focusing only on fact-checking and supply-side reduction of fake news are insufficient; demand-side resistance to inconvenient truths must also be addressed
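As a toy numerical illustration of the first and third insights above (the roughly 2:1 weighting of concordance over truth, and the resulting preference for concordant falsehoods over discordant truths), consider a hypothetical linear model of belief ratings. The intercept and weights below are invented for illustration; they are not the paper's actual estimates, and only the 2:1 ratio comes from the episode:

```python
# Hypothetical linear model of belief on the study's 1-5 scale. The only
# constraint taken from the episode is the reported ~2:1 influence of
# political concordance over truth; specific numbers are assumptions.
BASELINE = 2.0    # assumed baseline belief rating for any headline
W_TRUTH = 0.5     # assumed weight for the headline actually being true
W_CONCORD = 1.0   # twice W_TRUTH, reflecting the reported 2:1 ratio

def predicted_belief(is_true: bool, is_concordant: bool) -> float:
    """Predicted belief rating for one headline under the toy model."""
    return BASELINE + W_TRUTH * is_true + W_CONCORD * is_concordant

# Under this weighting, a false-but-concordant headline is rated as more
# believable than a true-but-discordant one:
false_concordant = predicted_belief(is_true=False, is_concordant=True)  # 3.0
true_discordant = predicted_belief(is_true=True, is_concordant=False)   # 2.5
```

If the weights were reversed, with truth weighted twice as heavily as concordance, the ordering would flip, which is the pattern the earlier research discussed later in the episode claimed to find.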
Trends
- Rise of algorithmic information silos creating a false sense of objectivity among users consuming one-sided media diets
- Increasing affective polarization (antagonism between political groups) outpacing ideological polarization in society
- Shift from active information seeking to passive algorithmic feed consumption, reducing user awareness and consent
- Integration of generative AI with social media platforms amplifying coordinated disinformation at scale
- Disconnect between surveillance capitalism's promised effectiveness of targeted advertising and actual measurable results
- Growing recognition that media literacy and fact-checking interventions may backfire among highly partisan audiences
- Emergence of need for community-led, bespoke interventions rather than top-down institutional approaches to misinformation
- Convergence of technology infrastructure with nation-state surveillance capabilities, creating comprehensive information control systems
Topics
- Concordance Over Truth Bias
- Confirmation Bias and Disconfirmation Bias
- Political Polarization and Affective Polarization
- Fake News and Misinformation Detection
- Computational Propaganda and Algorithmic Manipulation
- Media Literacy and Information Literacy Campaigns
- Objectivity Illusion and Partisan Bias
- Social Media Information Ecosystems
- Cognitive Reflection and Analytical Reasoning
- Epistemic Communities and Shared Reality
- Surveillance Capitalism and Targeted Advertising
- Generative AI and Synthetic Content
- One-Sided Media Consumption
- Intellectual Humility in Discourse
- Disinformation Research Methodologies
Companies
The Onion
Satirical news website whose articles were frequently mistaken for real news, inspiring the Literally Unbelievable pr...
Facebook
Social media platform where thousands shared false Onion articles about Planned Parenthood, demonstrating early fake ...
Reuters
News source used to provide verified true headlines for the research study on concordance over truth bias
CNN
News source used to provide verified true headlines for the research study on concordance over truth bias
New York Times
Referenced as example of institutional media facing challenges in producing quality information against social media ...
Wikipedia
Referenced as part of original internet infrastructure that prioritized information access over data extraction
TikTok
Social media platform cited for algorithmic amplification of concordant content leading to harmful behavior changes
Instagram
Social media platform discussed for algorithmic feed-based information consumption without user awareness or consent
ChatGPT
AI system discussed as example of homogenized information drawn from limited demographic and historical data sources
People
Samuel Woolley
Professor at University of Pittsburgh studying computational propaganda, disinformation, and online manipulation tactics
Katie Joseph
Researcher studying neuroscience of misinformation and how algorithms manipulate discourse and social norms
Michael Schwalbe
Postdoctoral fellow at Stanford studying cognitive processes in polarization, misinformation, and interventions
Geoffrey Cohen
Psychologist who co-authored the concordance over truth bias research paper but not interviewed in episode
David McRaney
Host and producer of You Are Not So Smart podcast; author of How Minds Change
Shanto Iyengar
Stanford researcher whose work shows partisanship is now the primary social cleavage causing animosity
Herman and Chomsky
Authors of Manufacturing Consent, whose work on propaganda was referenced and built upon in recent research
Brittan Heller
Researcher cited for Art of War concept of building golden bridges for enemies to retreat across
Wilson and Brekke
Authors of 1994 paper outlining five-step process for awareness and correction of cognitive biases
Quotes
"The more you believe that you're not going to do this, the more likely you are to do this."
Samuel Woolley•Early in episode
"The more you consume only one perspective or one silo of information, the more you think that you are more objective because you're only seeing that one side."
Katie Joseph•Mid-episode
"Truth is inconsequential according to this study."
Michael Schwalbe•During findings discussion
"People need to be made aware of the existence of the bias, but they also crucially have to accept that they're vulnerable to the bias."
Michael Schwalbe•Solutions discussion
"We have to build a golden bridge over which our enemies can retreat."
Samuel Woolley (quoting Brittan Heller)•Conclusion section
Full Transcript
You can go to kitted, K-I-T-T-E-D dot shop and use the code SMART50, S-M-A-R-T five zero at checkout, and you will get half off a set of thinking superpowers in a box. If you want to know more about what I'm talking about, check it out, middle of the show. Welcome to the You Are Not So Smart Podcast, episode 332. My name is David McRaney. This is the You Are Not So Smart podcast. And let's open this episode about fake news with a question. Have you ever commented on a post or shared an article on social media or sent a link to a friend, only to later learn that it was either partially or completely fake? Like actual fake news, has it ever actually tricked you just by seeming as though it was probably true? Whether you've shared a bit of fake news or not, you've probably been exposed to quite a bit of it. And you've probably received a link from a friend or a family member or seen a post on their social media feed that was, to you, obviously fake, but to them raised no skeptical alarms. Back during the Obama years, back when social media had only been around for about a decade, there was this website named Literally Unbelievable that cataloged examples of people doing this. Specifically, it cataloged people mistaking satirical articles from The Onion for real news. The Onion, if you are unaware, is a wholly fake news website that writes satirical articles and headlines for the sake of comedy. And the founders of Literally Unbelievable were inspired to start their website after noticing how many people were sharing and commenting on Facebook an Onion article with the headline, Planned Parenthood Opens $8 Billion Abortionplex. You can still read this article on The Onion's website. The lead paragraph reads, Planned Parenthood announced Tuesday the grand opening of its long-planned $8 billion Abortionplex, a sprawling abortion facility that will allow the organization to terminate unborn lives with an efficiency never before thought possible.
And the article goes on to detail how the mall-like facility will feature coffee shops and bars and restaurants and retail stores, a 10-screen theater, and more. Thousands of people shared this article on Facebook, believing it was true. Thousands of people commented on Facebook underneath this shared article, expressing how angry they were that this sort of thing was happening in Obama's America. Now, Literally Unbelievable has since ceased operations, but you can still check it out on the Internet Archive's Wayback Machine. I did that just now, and here are some of the more popular Onion articles people believed were true back in the 2010s. 42 million dead in bloodiest Black Friday weekend on record. Congress threatens to leave D.C. unless new capital is built. Obama begins inauguration festivities with ceremonial burning of Constitution. Scientists successfully teach gorilla it will die one day. Now, what I love about Literally Unbelievable, it also makes me sad, but I do love this, is how it revealed just how quickly digital media adapted to commenting and sharing and subscribing. Both legitimate and completely fake news websites learned very quickly how to generate powerful clickbait, the kind that bypasses our skepticism and encourages engagement, the kind that avoids the "there's no way" reaction and instead favors the "yeah, that sounds about right" reaction. There are many psychological terms for all of this, one of which is disconfirmation bias. That's the human tendency to apply excessive skepticism and scrutiny to information that contradicts your beliefs while readily accepting evidence that supports them. It's the impulse to hold opposing ideas to an impossibly high standard of proof while accepting familiar beliefs without raising an eyebrow. When it comes to news headlines and news stories and internet content in general, we are often, without our conscious awareness, quite selectively skeptical of incoming information.
We selectively apply critical thinking, often accepting without question news stories that confirm our assumptions and attitudes and beliefs about the world, while powerfully scrutinizing news stories that seem to contradict our assumptions and attitudes and beliefs. And there are many things that contribute to your skepticism or lack thereof from topic to topic, moment to moment. But when it comes to political headlines, we seem particularly selective. Psychology has conducted some fascinating research into all of this and landed on arousal as the number one motivation to share an article with others. It's the number one thing that is likely to bypass your skepticism. The more a headline or article arouses you, the more likely you are to share it without fact-checking it first. Arousal is the psychological term for when your autonomic nervous system gets triggered in a way that directs your attention to the matter at hand. And the emotions that most trigger arousal when it comes to news headlines are those that make the people you don't like, politically speaking, look bad in a humorous way, or the sort that make the people you do like look particularly good. Or, and this is often the easier way to generate arousal, headlines that, politically speaking, generate fear and/or anger. And the research into all of this is ongoing. Here are two example headlines from a very recent study into all of this. Imagine you are scrolling social media and you run across one of these headlines from what appears to be a reliable news source. Headline one: Trump says, I don't like poor people, during private meeting with business moguls. And here's another headline: Free stay for veterans at Trump Hotel in Washington, D.C. In the study, they showed people headlines like these, these headlines in particular, and measured their likelihood of sharing them on their social media feeds or sending them to friends. Now, both of these headlines are fake.
One arouses people who don't like Donald Trump, and one arouses people who do. And in both cases, people not only believed these headlines, they said they would, yes, probably share them. And I'm wondering, what would you do? If you ran across one of these two headlines, would you believe them? Would you share them? Because the researchers who wrote these headlines and presented them to study participants not only found that people were highly likely to share them depending on their political ideology, they also found something that I found rather arousing, psychologically speaking. So much so that I want to share with you what one of those researchers told me. The more you believe that you're not going to do this, the more likely you are to do this. That was the voice of Samuel Woolley, who is a scientist who studies propaganda and disinformation. More specifically, he studies how nefarious actors purposefully manipulate our thoughts, feelings, and behaviors using the tools of propaganda and disinformation. And he also studies who exactly is using such tools right now and why they're using them and how. The more you consume only one perspective or one silo of information, the more you think that you are more objective because you're only seeing that one side. That's the voice of Katie Joseph, who is also a scientist who also studies propaganda and misinformation. Specifically, she studies the neuroscience of how brains interact with misinformation and propaganda delivered via digital media and algorithmically generated content streams. And as Katie Joseph just described, the more a person becomes informationally siloed and thus, in effect, the less objective they become, oftentimes that results in a perception on their part that they are becoming more objective. 
The people who, in a sense, believed that they were the most objective and least biased when it came to their being influenced by political concordance, when it came to their partisan bias, they were the most biased and least objective. That's Michael Schwalbe, a social psychologist at Stanford who studies polarization, disinformation, the negative cognitive impact of inequality, and the effectiveness of interventions aimed at defending against all of those things. And so in this whole bigger conversation we're having around this kind of, quote, enshittification or AI slop or increasing fracturization or fragmentation of the information ecosystem, it is concerning to say that, like, okay, I've been following this one influencer or this one thread of information, and I've watched every single one of their videos, so I do think that they're authentic, or I've explored so many angles of this one topic area, but you're in that one little silo, and you think that you're more and more objective, so you're not encouraged to look further. In this episode of the You Are Not So Smart podcast, we sit down with three researchers, all of them scientists who study disinformation and propaganda, whose new paper found something surprising. Sure, nefarious actors can leverage our confirmation bias, our propensity to believe in and share information that confirms our assumptions and worldview, but this new research shows that our disconfirmation bias must also be taken into account. After the break, we meet these scientists, ask them why they conducted this research, learn how they conducted this research, and ponder the major takeaways, including what should we and can we do about this? All that after the break. Okay, that thing I said I would talk about in the middle of the show. It's not quite the middle of the show, but here's the thing. So curiosity is this unusually common trait of people who listen to this podcast. You may have noticed that about yourself.
And if you're the kind of person who wants to understand how minds work and sometimes don't work, which is clearly who you are because you listen to the show, you are probably super interested in critical thinking. If you are the kind of person who is right now listening to this podcast, then you might also be curious to find out about the higher order thinking skills course that I am co-presenting at the Executive Thinking Academy. The Executive Thinking Academy. It's about executive thinking, like the executive centers of your brain, but also executive thinking too, if that's what you want to do with it. It's a four-week course to level up your strategic, creative, critical, and executive thinking skills. But it's a bit different because first, it's not a passive exercise in watching a video and then filling out some multiple choice questions. Instead, you will be actively participating in hands-on activities using templates and frameworks that you can use well beyond the course itself. It's a genuinely interactive experience that will help you to think in new ways. You also get the full set of kitted thinking tools with more than 200 beautifully designed physical cards in these fancy magnetic boxes that you can use to plan and facilitate workshops, elevate brainstorming sessions, supercharge strategy planning, and much more. These cards have digital versions. They have QR codes on them. They have a whole thing that you can use on a website to make them cool. The course is incredible, and it shows you a bunch of ways to use those cards at your workplace or anywhere else. And you'll have the option to learn collaboratively with a small group of like-minded peers so that you're holding each other accountable and encouraging each other to push your thinking boundaries. Plus, you won't just get access to this one course.
You get 12 months of membership to the Executive Thinking Academy itself, and that includes webinars and Q&A sessions with global thought leaders, with authors, with academics. It's a whole lot of stuff, and you get 50% off if you use the code SMART50 at checkout when you visit kitted.shop. Half off, SMART50, kitted.shop. If you are curious to learn more and to join me for next month's higher order thinking skills course, head over there right now, click on the link in the show notes, and lock in your place. And now we return to our program. My name is David McRaney. This is the You Are Not So Smart podcast. And in this episode, we are exploring a newly named cognitive distortion. It's called the concordance over truth bias. And our guests on this episode are three of the scientists whose new paper outlines how it works. One of those scientists is Samuel Woolley. I'm Sam Woolley. I am a professor over at the University of Pittsburgh. I just landed here after spending five or six years at UT Austin. I am here at Pitt. I am what's called the Dietrich Chair of Disinformation Studies. So that means I focus on the purposeful spread of false information. Broadly speaking, though, my research and work has been on propaganda online. So the ways in which propaganda spreads online, with a particular focus on what my colleagues and I call computational propaganda. So the involvement of algorithms, AI, bots, anything computational in spreading manipulation of public opinion. So we wrote a book called that a while back. And I have another book that came out last year, along with a few others, called Manufacturing Consensus, which riffs on Herman and Chomsky's work on manufacturing consent, but for the digital age. So that's me in a nutshell. I really am interested in studying the producers of propaganda and disinformation, but also the impacts of it too.
Another one of those scientists is Katie Joseph. I'm a former misinformation researcher, though I occasionally still do research. The long and short of it is, I'm very interested in how people make decisions. And so when I was an undergraduate, I studied social neuroscience and then also international security to understand the micro and the meta mechanisms of why we are making the decisions we're making in society. And that led me to doing my master's looking at this topic, like how partisanship impacts perceptions of misinformation, because I saw very early in the literature that people don't act based on their belief systems per se, but most often based on social norms. And the mechanism through which social norms are manipulated most effectively and easily and potentially cheaply, depending on how you're doing it, is through manipulation of algorithms that shape discourse on the information ecosystem. And the third scientist who will tell us all about this research is Michael Schwalbe. Yeah, so my name is Michael Schwalbe. I'm a postdoctoral fellow in the psychology department at Stanford University, and my research looks at the cognitive processes in polarization, misinformation, and economic hardship, and interventions to change these processes. There's a fourth scientist who also worked on this paper, psychologist Geoffrey Cohen, but I just didn't have time to add him to the interviews. We will get into the research and their takeaways in just a moment. But first, I wanted to know how this became a topic that Samuel Woolley, Katie Joseph, and Michael Schwalbe wanted to study. Katie is brilliant. She worked with me first at the Institute for the Future and then at the University of Texas as a researcher on my team, as the research lead on my research team. And this was conceived of originally as her master's thesis at Stanford. She was really interested to know whether or not truth actually mattered when people were reading the news. We all have deep experience with propaganda.
Every single person, that's part of the reason I've loved studying it, is because a lot of my work actually, some of it was quantitative and survey-based, but a lot of it was just interviewing all different types of people and hearing people's perspectives. And I always learned something new, because everyone's being targeted by all these different types of messages that are shaping our behavior, myself included. And so I just wanted to know. I wanted to talk to people about, like, why do you believe that? Or, like, why are you creating, you know, an AI propaganda tool? Or why are you, you know, mostly it has to do with money. At the end of the day, a lot of the manipulation of the information ecosystem, people are getting paid to do it, or they're getting compensated in some way through non-monetary means. And it's just really interesting to study all the different levers that shape our behavior and social norms at scale without us even realizing that we're being shaped. Yeah. So there were two motivations or inspirations for the paper. The first was actually a colloquium talk that we saw about misinformation, where the speaker claimed that politics didn't really matter that much in determining whether people believed fake news, and that the problem was more an issue of people being, like, cognitively lazy. People just weren't engaging in deep enough cognitive processing. And the researcher was presenting on what was, like, a really big part of the field at the time. And it didn't really cohere with our priors. And the methods that the researchers were using in this talk didn't seem optimal to us. So we ran an exploratory study using the same research paradigm, except we tried to address some of the methodological concerns in the previous research. And in our pilot, we found the opposite results, namely that politics did matter a lot for people's beliefs.
And so then we conducted this pre-registered, scaled-up version of the study that became this paper. I also asked Woolley, Joseph, and Schwalbe why this kind of research was important right now, in this strange time for both politics and journalism, but also algorithms and just information exchange in general. Well, look, like, you know, mis- and disinformation have always been problems, and politicians have always lied. But there are some things that are very substantively different about the nature of false information and lies in our current world. One is social media, right? And the internet more broadly. The way that we receive our information is relatively unfettered in a lot of senses. Anyone can produce content on many of these sites. And so for a long time, there was a perception, long time relatively speaking, I guess, there was a perception that these tools would be great for democracy and for learning. You could go anywhere and get whatever information you wanted. You could use social media to spread your take on things. It just didn't bear out to be that way. After the Arab Spring and Occupy Wall Street, if people can remember back to those times when everyone was celebrating social media as this massive democratic organizing force, people started to realize social media has actually been co-opted for control by a lot of different groups, and especially powerful political groups and powerful corporate actors and things. They understand how to leverage the noise factor of social media, the amplification factor, the suppression factor of social media to spread particular messages. And so we get our information much more quickly than we ever have before. We don't have very good verification processes for that information. Yet at the same time, we're being told by the powers that be that somehow this is better for free speech.
And, you know, after 10 or 12 years of studying this stuff, I've come to the conclusion that in reality, free speech doesn't actually exist online, that there's the illusion of free speech, that there's some degree of free speech, like, yes, anyone can post anything online. But when it comes to trends, when it comes to algorithmic control, when it comes to the most sophisticated information operations and propaganda on social media, it's still the most powerful political groups and corporate groups that are able to most effectively control the message. And so the open marketplace of ideas, this thing where the best ideas are meant to rise to the top, doesn't really seem to exist online. It's just the illusion of that. And so free speech is a very useful stand-in for what actually is quite potent control of the informational environment by some very powerful people. The main concern I have, and many people share this belief, is that the reason we're so fractured right now is because of this prioritization of our information exchange that's based on the extraction of our data and attention. And we have built this surveillance system that rivals countries like China, where surveillance has been and continues to be used to carry out genocide against the Uyghur population and others. In the U.S., we've built a similar surveillance system, but for the sake of targeted advertising. But interestingly, in my research and my perception, but of course, I have a bias, so maybe I have a filter, targeted advertising has been found not to be effective. It's like, it doesn't work for the user. And it doesn't work for the buyer of the ads, like these businesses. There was a study that had been done in the UK. It was around, I think, 2018, 2019, 2020. And they looked at millions and millions of the most expensive targeted ads. And over 50% of the money was lost to data brokers, where they're like, we don't know where that money went.
And then ultimately, it only reached, like, I think it was 12% of impressions. So we built this mass surveillance system that doesn't actually even allow the ads to get to the audience they're supposed to get to. And now, with the expansive use of generative AI, more and more of these platforms, which are already dominated by fake accounts, are now full of uniquely persuasive fake accounts that are populated by generated content. So it's very, very hard for companies to detect them, because it's not like one image is being used across hundreds of thousands of accounts. And it goes against the companies' bottom lines to detect all these fake accounts, because of how they make all their money: one of the main metrics is daily active users, and AI-generated accounts are also very active. And so this whole infrastructure we built around surveillance and targeted ads is built on a lie, and it only benefits the companies that are brokering it, which is how they became some of the top companies in the whole world in terms of revenue: by selling this lie that targeted advertising works, even though it's increasingly not working. And so we've built this mass surveillance system. And now we do see this convergence that many people had forewarned about, because we've seen it in other countries, where the technology is also merging now with the nation-state in some elements. And there's been open acknowledgement that they want to build more comprehensive surveillance systems. And so I'm very concerned, because when we look at the capturing of information ecosystems throughout history, there wasn't this level of comprehensive surveillance in terms of, like, all of your correspondence, all of your location data, all of your internet search history. It's like, before, you may have been able to, like, hide a book or, you know, hide letters someplace.
I mean, I think of the many changes we have faced in society over the past half century, or at least the last couple decades, two of the biggest are, first, the democratization of reporting, i.e., the rise of the internet, and second, the huge increase in affective polarization. So ideologically, we might not be that polarized. There's actually a lot of debate in the field about whether ideological polarization has risen, but there is consensus that we have had a really big increase in affective polarization, i.e., the degree to which people feel divided and antagonistic towards one another along political lines. I mean, so much so that at this point, and this is research from Shanto Iyengar over at Stanford, he's shown that of all the possible cleavages in society, you know, whether it's race or gender or other demographic factors, partisanship is now the number one cleavage in terms of what's dividing people, what's causing animosity. So I think these are two really big changes, really big phenomena in society. And I was really interested in research on how these two phenomena, in a sense, interact with one another. And I also think about the loss of a shared sense of reality. I forget who said it, but someone called this the tragedy of the epistemic commons. I think the loss of that shared reality with other people makes connection across partisan lines just, like, a lot harder. And I think it's a critical issue for tons of reasons. Okay. Let's get into the actual study, how it was done and what they found. This may come as a surprise, but heading into this, there was some debate in psychology among people who research this sort of thing as to whether the truth eventually wins when it comes to disinformation and propaganda. Here's Michael Schwalbe again. So the existing research up to now found that the effect of headline truth was about four times greater than the effect of headline political concordance.
Or said another way, some of the claims that were made were that, in a sense, truth trumps politics. And so in our hypotheses, based on our pilot and based on our priors, we predicted the opposite. Namely, we predicted that we'd find a significant effect of headline political concordance, i.e., partisan bias, in participants' beliefs of and reported likelihood of sharing partisan news headlines. You know, what matters more in believing and sharing political news, the truth or its concordance with our own political views? We also sought to unpack the effect of political concordance or partisan bias by seeing whether it was stronger for true or fake headlines. Or said another way, to explore which was stronger, the acceptance of convenient falsehoods or the resistance to inconvenient truths. What we did was we went out and talked to people who were census-matched online adults. They gathered more than 1,000 proponents and opponents of U.S. presidential candidate Donald Trump, and they made sure that, as a whole, they were a representative sample of the demographics of the United States population based on the census. And they did all of this before the election. We showed them headlines ahead of the 2020 election that were both fake and real. And we wanted to figure out whether or not they would understand that a headline was true, regardless of their political affiliation. These participants were told they'd be taking a survey about news headlines to measure their recall of those headlines. And the researchers did this to avoid priming these participants when it came to things like accuracy or their ideology or their identities. So whereas past research at the start of the studies would say, we're interested in seeing how accurately you rate each headline, we instead used a cover story that the study was about recall. It was about memory. So we told them up front, you know, this is a study about memory.
Would you be willing to commit to not, you know, writing down the names or using any memory devices, and just giving it your full attention? We framed it as kind of a reading comprehension, social media, memory study, as opposed to an evaluation of accuracy. So people weren't tipped off that they were supposed to be judging accuracy. They then presented these people with fake political headlines and real political headlines. Each participant read 16 headlines in all, one at a time, in random order. And after each of the 16 headlines came up individually, they would rate the headline not just on how likely they thought the events in the headline were to be true, or how likely they would be to share it. They also indicated how interesting the headline was, how often they see headlines like that, or how they would respond to the headline with, like, a bunch of different emoji, to give them a sense that there were other factors going on here. Half of these headlines were about Trump, and the other half were just non-political filler headlines. And of the political headlines, half were positive, showing Trump in a favorable light. For instance, here's one of the headlines from the study: Donald Trump, serious contender for Nobel Prize in economics. And half of these headlines were negative; they showed Trump in an unfavorable light. Like this one: Trump's former accountant says Trump is not a billionaire. So of those 16 headlines, eight were partisan and the other eight were filler headlines, to give it a good balance so people didn't think it was just about politics. And of the eight partisan headlines, half were favorable toward Donald Trump and the other half were unfavorable toward Donald Trump. Okay, now we're really getting into the actual study. So here's what these participants did.
For each headline, they rated one to five how likely it was that the events described in the headline were true. And then they rated one to five how likely they were to share the article with friends or family. And the researchers masked all of this by asking for ratings of all sorts of other stuff. Yes. So it's basically because we had these four breakdowns of the headlines: headlines that were very clearly true, true headlines that, you know, could be confusing to people, false headlines that, you know, could be confusing to people, and headlines that were very clearly false. And then we had people rate, like, how likely do you think this is to be true? And we also rated how aligned each one was with your political belief of being pro-Trump or anti-Trump. And it's worth noting that the true headlines in this study were very much true headlines, from places like Reuters and CNN. And for each true headline, the researchers independently fact-checked them just to be extra sure they were indeed true. And then we wanted to have different degrees of fakeness. So we had one category of fake headlines that were fake, but not, like, outlandishly fake. And then we wanted to kind of see if there was a limit to this effect. So we also created headlines that at the time, you know, this was back in 2019 or so, to us were like, these are clearly outrageously fake headlines. Like, we think this is probably the limit of the phenomenon. Yeah. For both the true and fake headlines, they varied how questionably true or outlandishly fake they were. One of my favorite outlandishly fake ones was Trump beats grandmaster chess champion Magnus Carlsen. A less outlandishly fake headline was Donald Trump killed pedestrian while driving in 1973. And a not very outlandishly fake headline was Trump said, I don't like poor people, during private meeting with business moguls. They also ran variations of this design just to make sure their methods were sound.
All real headlines sometimes, different mixes of positive and negative, that sort of thing. We served different mixes to different people in different conditions. So, like, one fourth of the people were in the condition of only seeing true information. And then the rest saw a mixture of the true and false headlines, as well as some non-political filler headlines, which were real viral headlines. And actually, I feel like the headlines I remember the most from that study were the non-political filler, like the one about this penguin traveling thousands of miles to meet the man who saved him. But so, and then we had people. No, I remember from your study the headline that Donald Trump beats Magnus Carlsen in chess. I'm still remembering that headline more than the other headlines. And it was so fun to come up with all of those different scenes, like, you know, Trump dressing like the Pope at an orgy. It was just so funny. I remember, actually, between studies, and maybe this isn't right, but I remember there were a few headlines we actually had to change over time because they became true, and so we had to alter them. Not that that one became true. The ones that did were more, you know, on the fence. So people saw the 16 headlines, they rated them, and then afterwards they did a memory test, to preserve the cover story, of how many of the headlines they recalled. And then after that, they did a number of surveys around their partisanship, their degree of objectivity illusion, their extreme views of Trump. So how likely did they think Trump was to bring peace to the Middle East, or to launch a nuclear war, both positive and negative kinds of extreme views of Trump? Did they think Trump was a saint, or a genius, and so forth. And then some other survey measures, like one-sided media consumption and demographics. So what did they find?
We've discussed this all over the place. At the end of the day, what were the findings of the study? I think that people who are familiar with confirmation bias would initially go, well, yeah, you know, for some things. Because I think a lot of this research right now, when it makes headlines, ironically, is about susceptibility to fake news. The idea being, if you're in the tank for one side or the other, you'll be more susceptible to fake news that makes the other side look bad. And sure, yes, you found that that's a thing that is true. That is a thing that people do. But the thing that I love about this is it reveals another thing, which people might not be aware of themselves. And instead of taking it away from you, I want to ask you: what did you find when it came to the way people respond to true headlines? Headlines that are true, but they don't make your side look good. You don't respond well. Even when a headline is true, if it doesn't jibe with your beliefs, especially if you're a very partisan person, especially if you have sort of a one-sided media diet, you're very unlikely to believe it's true. Which would then translate to: you're unlikely to share it, you're unlikely to put it on your social media. Yes, exactly. So it means that, regardless, if the article doesn't jibe with your political beliefs and your perspective, even if it's true, you're not going to share it. Or, to put it another way, the truth is inconsequential, according to this study. I mean, maybe even a little bit surprising to us at the time was that this effect of political concordance was stronger than the effect of headline truth, up to around two times the size of the effect.
Concordance, in psychological terms, is the perceived agreement or consistency of novel information with your existing beliefs, attitudes, assumptions, and allegiances. And so people were more influenced by the concordance of headlines than by their truth. And we describe that phenomenon as a concordance over truth bias. The concordance over truth score took how people rated the veracity of the political headlines that were concordant with their either anti-Trump or pro-Trump viewpoint, and then subtracted their rating of the veracity of the true headlines, regardless of how those aligned with their political beliefs. And when we looked at, okay, maybe our headlines aren't fake enough, do we still see this with the outlandishly fake headlines? We found that it held up: participants still rated outlandishly fake concordant headlines as more likely to be true, and were more likely to share them, than real headlines that were discordant with their prior views. So the listeners can't see me, but I'm, like, putting my head on my desk because it's so frustrating, as someone who teaches and studies journalism. Partisans were more resistant to information that was true but did not line up with their perspective of politics or of the world than they were to content that was extremely, clearly fake. You know, we talked about how in this study the fake news ramped up and ramped up and ramped up. Even when the content became pretty clearly fake to you or me, perhaps, they still wouldn't share the true news over the fake content that jibed with their political beliefs. And also, another interesting little wrinkle in all of this, maybe it's unsurprising, but I found it interesting, is that people tend to remember the fake stories much better than they do the true stories.
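As a rough illustration of how a score like that might be computed, here is a minimal Python sketch. The ratings, grouping, and exact formula are illustrative assumptions for this episode's description, not the study's actual data or scoring code.

```python
# Hypothetical sketch of a "concordance over truth" score computed from
# 1-5 veracity ratings: mean rating of belief-concordant headlines minus
# mean rating of actually-true headlines. All numbers here are made up.

def mean(xs):
    return sum(xs) / len(xs)

def concordance_over_truth(ratings):
    """ratings: list of (is_true, is_concordant, veracity_rating) tuples."""
    concordant = [r for is_true, conc, r in ratings if conc]    # fits prior views
    true_only = [r for is_true, conc, r in ratings if is_true]  # actually true
    # A positive score means concordance swayed belief more than truth did.
    return mean(concordant) - mean(true_only)

# One made-up participant who believes concordant headlines regardless of
# truth and doubts a discordant-but-true headline:
participant = [
    (False, True, 4),   # fake, concordant: believed
    (False, True, 5),   # outlandishly fake, concordant: believed
    (True, False, 2),   # true, discordant: doubted
    (True, True, 4),    # true, concordant: believed
]
print(concordance_over_truth(participant) > 0)  # prints True: bias present
```

The point of the subtraction is that a perfectly truth-driven rater would score at or below zero; the study's finding is that partisans land well above it.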
They also tested the participants' aptitude when it comes to something psychologists call cognitive reflection, a feature of human cognition we have covered quite a bit on this podcast. And they did this because some psychologists, before this study, took the position, based on research that suggests this is the case, that people don't fall for fake news because of motivated reasoning, but because they fail to think about their own thinking. Here's an actual question that psychologists sometimes use to score people on their cognitive reflection skills. If a hole is three feet around and three feet deep, how much dirt is in it? The answer is none. It's a hole. Another one: if you are running in a marathon and you pass the person in second place, what place are you in? The answer is you are now the person in second place. And here is another. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. It takes 48 days for the patch to cover the entire lake. How long would it take for the patch to cover half of the lake? The answer here is that if every day it doubles in size, then the day before it covered the entire lake, it was half the size. So if it covered the whole lake on day 48, it covered half the lake on day 47. So yes, before the study we are discussing in this episode, some psychologists were of the opinion that, when it comes to fake news, it all boiled down to lazy thinking, and that people were more likely to believe and share disinformation when they felt most safe, most permitted, and most encouraged to be lazy in their thinking, to just trust their intuition. But that was not what the researchers in this study found. When it came to how people rated these headlines, they found that it wasn't partisanship, education, or cognitive reflection skills that most predicted how people would respond.
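The lily-pad item reduces to a tiny bit of backwards arithmetic, which a throwaway sketch makes concrete (this is just an illustration of the riddle's logic, not anything from the study itself):

```python
# Toy check of the lily-pad cognitive reflection item: a patch that doubles
# every day and covers the whole lake on day 48 covered half of it on day
# 47, not on day 24 as intuition suggests.

def day_patch_covers(fraction, full_day=48):
    """Day on which the doubling patch first covers `fraction` of the lake."""
    day, size = full_day, 1.0
    while size > fraction:   # walk backwards: the day before, it was half as big
        day -= 1
        size /= 2
    return day

print(day_patch_covers(0.5))   # prints 47
```

The intuitive-but-wrong answer (24, half the days) is exactly the kind of fast response cognitive reflection tests are designed to catch.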
You looked at education levels, you looked at analytic reasoning ability, and you looked at some measures of partisanship. What seemed to be the predictors of this bias? What was most predictive? Yeah, it's a great question. So it was robust across low, medium, and high education levels. We found it persisted even amongst people with advanced degrees. So education did not interact with or buffer people from this concordance over truth bias. When we looked at analytical reasoning, we similarly found that concordance over truth bias was robust amongst people who were low in analytical reasoning and people who were high in analytical reasoning, or depth of cognitive processing. It was pretty robust across demographic factors. And when we looked at ideology, we found it to be prevalent across both sides of the political aisle. So people have different ideologies, but they share a tendency to be more influenced by political concordance than by factual accuracy, at least in our data. Of the top three predictors, number one was the objectivity illusion, the degree to which people believed in the objectivity and lack of bias of their own political side relative to the other side. The people who, in a sense, believed that they were the most objective and least biased were, when it came to their partisan bias, the most biased and least objective. The key finding of the paper is that people who have a strong objectivity illusion and strong one-sided media consumption show this concordance over truth bias the strongest. And it's that cycle that you're talking about, where even when you do encounter information that is factual but not congruent with the belief that you're developing, you dismiss it more readily than the false information that confirms your belief. So I think this study shows a lot of what we're seeing generally in the information ecosystem.
I think a lot of the literature has just focused on fake news, and that's, like, half the picture. There's also a whole other area of research, which is really important: how are people reacting to true news? Are they resisting true news that's discordant with their prior beliefs? If so, that's also really important for a shared reality, a sense of an epistemic commons. The political concordance of news headlines determined people's belief in and intention to share news stories more than the actual truth of those headlines did. Participants were more likely to reject true headlines that seemed discordant with their political beliefs than they were to accept false headlines that seemed concordant with their political beliefs. In other words, you are more likely to be skeptical of true news that makes your side look bad than you are to believe fake news that makes your side look good. I think it makes a lot of sense, because, you know, the strongest cognitive biases we all have are confirmation bias and truth bias. Like, through motivated reasoning, as you referenced, we believe ourselves to be consistent people, and we take in everything in our world with that filter. Because also, if you didn't have a strong truth bias, a default assumption that what you're told is true, that would not be a workable way to move through the world. And assuming things that confirm your bias means you don't have to invest as much cognitive energy. So there's a reason why we hold these biases so prevalently. If we believe that what someone's telling us is true, that helps us build relationships and communicate with people and share information and resources much more effectively, because we're automatically building in that trust. And then, if we have this confirmation bias, that enables us to not spend so much cognitive energy questioning everything.
And it's really interesting, because obviously people who have more training and who believe themselves to be more educated would, in my opinion, say, okay, yeah, I have more evidence to draw from to confirm what I believe. And of course, there are other studies that show nudges, where, when people are asked, are you going to put money down on what you're saying being accurate, or are you going to put money down on sharing this thing you think is true or false, then people do take an even more critical step. But absent these nudges, it's natural for everybody to rely on those biases. And this is a bit of an aside, but something I've always thought about over the last few years of studying this: previously, you'd be on a panel and everybody would say, we need to encourage critical thinking, critical thinking is the number one thing. And that's part of what cognitive reflection is: are you going to just jump to the intuitive conclusion, or are you going to think and realize what the accurate answer is? But also, when we've spent time with people who hold conspiratorial beliefs, they're really critically thinking. They're jumping through hoops to think even more critically, or to dive even deeper into what they believe to be the true sources of information. They actually kind of use rigorous information-gathering techniques, but it's in a silo of information that's not necessarily based in reality or the scientific method. So it's very interesting when we emphasize critical thinking. I think there's always this kind of extreme overcorrection that can happen, even for people who aren't trying to convince themselves of a conspiracy, but are just seeing politically congruent information and want to believe it. So, given all of this, what are the big takeaways? I asked our scientists that very question, and here's what they said.
We focus primarily on the issue being a supply-side issue, the supply of fake news. And clearly, that's important, and that's a big part of the picture. If we didn't have as much fake news, people would believe in it less. They might still come to their own conspiracy theories. But the other big part of this picture is that people are disbelieving discordant news. They're disbelieving inconvenient truths that don't cohere with their prior beliefs, or that they're not motivated to believe. And I think what we've seen in the news cycle for the last four to eight years is that that's actually what's really in the headlines: people not believing certain truths. So I think it's really important, as we think about how to manage this phenomenon going forward, that it's not just about curtailing or fact-checking fake news. It's also about how we think about education, how we think about intellectual humility, in the sense of not just being critical of the news, but also being critical of our own minds. It makes sense that people are like, oh, I've spent years learning about this topic, so I am an expert, even if I don't have a degree or established expertise. But the interesting thing about the, quote unquote, established pathways of expertise is that ideally they have checks and balances from people who have other perspectives. So, you know, even in this paper, we're in discourse with other researchers who have come to other conclusions, and we're going back and forth, like, okay, we approached it with this scientific method, and we concede that maybe there are faults with this method, or that maybe they were correct in this other study, but this other study found it this way. So it's about having that discourse with people with different perspectives.
And even in the academic setting, some would say this is sometimes not helpful, but people build their careers off of finding these niches and being in contrast to other academics who are studying those niches. And maybe that leads to another silo, a meta silo, but at least there are these divergent viewpoints that are in conversation. Whereas with how the information ecosystem is designed today, it's all about extraction of data and attention, manipulation of behavior. That's the priority of how we're interacting with information. And so they just want to keep serving you what you're going to keep consuming, which is going to be what's congruent with you. So it's not even that people want to become one-sided. People don't know what they don't know. And unfortunately, that's the design of how we're interacting with information. And with ChatGPT, you don't even know what you're maybe not finding out about. You don't even know that you're reading some kind of distilled homogenization of thousands of data sets, drawn potentially from just the free information on the internet, which, as we all know, is not the best and brightest of human wisdom. And it's also prioritizing a very specific demographic, historical context, and user base, as opposed to all the wisdom that exists, diverse and rich, on our planet, intergenerationally as well. And it does also challenge some of our notions about the efficacy of certain types of media literacy campaigns and interventions in this space, too. We weren't studying particular interventions, but one of the big questions people always ask is, well, what are the solutions to the problems of disinformation and propaganda online? And if it's true that the people who think they're the least likely to fall for this kind of stuff are actually some of the most likely, then what does that mean for, you know,
fact-checking initiatives? What does that mean for how we train people? Does it mean that we need to inject a bit of humility into the training? How would that work? How would we do that? How do we make people who are already entrenched in their perspective, not just about themselves politically, but also about their political party, how do we work to change their behavior? And that is, I think, the million-dollar question that arises from this research. There's this paper back in 1994 by Wilson and Brekke that summarizes this really well. They lay out five steps. People need to be made aware of the existence of the bias, but they also, crucially, have to accept that they're vulnerable to the bias. Step three is that they have to be motivated to correct the bias. Like, if you are aware you've got the bias, and you accept that you're vulnerable to it, but you're just not motivated to do anything about it, that also might not be enough. And they have to grasp the magnitude and the direction of how it impacts them, and have a strategy to overcome it. So it's a lot. But having education around understanding that our introspections can be fallible, and that it's good to have intellectual humility, I think is an important step toward being less susceptible to concordance over truth bias, less susceptible to partisan bias. I would say my motto is, I'm always learning. And, to the point of intellectual humility in this paper, I do try to say I don't know when I don't know, and to be open to maybe as a concept. I have a friend who used to study this, and this is an aside, but she was like, you know, American discourse is very much about right or wrong. Even the language that we use is, I agree, I disagree.
And we don't bring enough into our language around maybe, and maybe opens up more potential for discourse and connection. And I've learned so much from people who have very different viewpoints from me. And, like we were saying, I resonate with the reasons why people consume conspiracies, even if I don't agree with their ultimate conclusions. But the reason I do feel elements of optimism is a root belief I have, which is that all people want to give love and be loved, and that, at root, that drives people. When you look at polling, a majority of people actually agree on a majority of things, around education, elements of healthcare, elements of gun control, elements of housing, elements of environmental protection, you know? So there actually is this kind of shared belief system that originates, as cliche as it is, in the power of love. And you can see it in everyone's daily life, in our own lives. I don't want to go out and be mean to someone or not believe them, and a lot of people don't want to be in that perspective either. It also doesn't feed back positively into people's own enjoyment of their lives. So that's one of the reasons I'm optimistic: that core belief I have about human behavior. Before, if you were reading a book, or you were in a conversation, or you were taking a class, or whatever, you kind of knew, oh, I'm going to consume information on this topic for this amount of time, and it's from this source. And so you had more consent around what you were consuming and how it was potentially integrating into your life, and you could choose more. But now, so much of information is just fed through a feed, as opposed to something someone actively searches for. Or it's fed to someone by an influencer that they follow, and, surprise, surprise, influencers are influenced by a lot of factors themselves.
You know, their primary role is to continue to hold attention, or to gain money from the attention that they've gained. And so that leaves them very vulnerable to being influenced by coordinated groups, from the bottom up or from the top down, you know, requests and compensation. And so it's just really interesting that we've been handing over the reins, increasingly, to these other elements that are causing us to consume information and shape our belief systems, and shape the store of knowledge with which we are then filtering future knowledge. And we don't even know what we're consuming. We don't receive a digest saying, oh, on Instagram, you consumed 25 minutes of cat videos that had a positive balance, and 45 minutes of weightlifting videos, you know, and then this had this outcome on your behavior. So we don't have that full loop, or that consent, or that knowledge about the information we're sharing. We're just allowing our whole perceptions of the world to be shaped beyond our own hands, you know. And unfortunately, it's not just with political information. It can offer you concordance around anything. I ran into some random person on the street, and they were telling me about how they had been looking at anti-nausea content on TikTok, and it took them down a whole, like, bulimia pathway. And I was just like, oh no, I'm so sorry. But, you know, we do see that with a lot of different social media platforms, where any type of concordance, or that hint of where they think you're going, could push you into a behavior change that is not what you actually want, and can derail elements of your life. Yeah. There's just been a bait and switch, right? Like, the original infrastructure of the internet, the Wikipedias of the world, and even the early social media platforms of this world, no longer exist.
And we have people like these billionaires telling us that it's an us problem. And then we have research from folks like our group and others that suggests that we're not very well equipped to deal with this problem on our own. Yes. And again, I'm always scared. Like, this is, I told you, you can't trust the media. I'm like, oh, God, this means you're going to think everything by the New York Times is wrong, and you're only going to listen to clownpenis.fart. Look, I think that people have to learn to live with dialectics, right? To say, yes, it's true that there is corporate media control, and it's also true that there are a lot of good journalists out there still working really hard to try to produce high-quality information. It's just that they're working against a tide, the tide of social media and the internet and the noise economy, that is basically impossible to combat without guardrails and without culpability. For me, based upon my background and what I've studied for a very long time, while the findings of the study are super concerning, they're also somewhat edifying, in the sense that they suggest that we do need to work on problems associated with polarization, associated with people's political identity, and that we do need to unpack the extent to which that itself has been cemented, or whether or not it's mutable. There's a friend of mine, Brittan Heller, who one time quoted The Art of War. She said, we have to build a golden bridge over which our enemies can retreat. That's from The Art of War, and it's this idea that if we just supply people with true information, this study suggests that that's not enough. But it doesn't mean it's something we shouldn't do. It means it's not enough on its own.
We also have to address people's political identities, their cultural identities, their belief systems. And so the solution to this problem of disinformation and misinformation, which has been so spoken about in the media, is not a simple solution. It's a solution that's going to take time. It's a solution that's going to require things like thoughtful media literacy campaigns and informational literacy campaigns that are not just built for a one-size-fits-all audience, but that are bespoke to particular communities that understand things in a particular way. And potentially, and this isn't in this research, but one of my beliefs that has grown out of it, is that you've got to have members of those communities leading those kinds of initiatives. There's got to be buy-in from them. It can't be academics or fancy journalists or anyone else helicoptering in to do this work, because the reality is that the research shows that oftentimes that backfires in and of itself, especially amongst conspiracy theorists and hardcore partisans, because they're much less likely to have institutional trust. And part of that's compassion, right? Part of that's understanding affect and compassion and other things like that. It's relational. And that's not very sexy to scientists all the time. That is it for this episode of the You Are Not So Smart podcast. I am your host and editor and reporter and writer and everything else for the You Are Not So Smart podcast. If you enjoy this show, if you've gotten anything out of it, your support is going to help keep it going in the future. You can support the show at Patreon, patreon.com slash youarenotsosmart, and there's a link in the show notes. There's a link to everything that we talked about in this episode in the show notes, right there in your podcast player, and also over at youarenotsosmart.com. But yes, this has always been a one-person operation, and your support is greatly appreciated.
You can find my book, How Minds Change, wherever they ship books in trucks, wherever they sell them, wherever they put them on shelves. And you can find details about that at davidmcraney.com, and also links right there in your podcast player. I'm scheduling my lecture appearances right now. If you'd like me to come speak at your institution, academic or otherwise, just go to davidmcraney.com, click on that part of the website, or email me at davidmcraney at gmail.com. For all the past episodes of this podcast, head to Apple Podcasts, Spotify, Amazon Music, Audible, all those places. YouAreNotSoSmart.com also has all the past episodes. Follow me on Twitter and Threads and Instagram at davidmcraney. I'm also on Bluesky at davidmcraney, all that stuff. Follow the show at NotSmartBlog over on Twitter. We're also on Facebook at slash YouAreNotSoSmart. The opening music is Clash by Caravan Palace. And if you really, really want to support the show, just tell someone, or perhaps everyone you know, about an episode that really, really landed for you. And check back in about two weeks for a fresh new episode. Thank you.