3 Takeaways™

Your Brain, For Sale: The Hidden Ways AI Can Manipulate You with Cass Sunstein (#273)

24 min
Oct 28, 2025
Listen to Episode
Summary

Cass Sunstein explores how AI can manipulate human behavior by exploiting our weaknesses, fears, and cognitive biases. The episode examines specific manipulation tactics—from anchoring and scarcity to social proof and loss aversion—and discusses the urgent need for legal protections including a "right not to be manipulated" in the AI era.

Insights
  • AI's unique threat lies in its ability to identify and exploit individual psychological weaknesses at scale, from short-term thinking to unrealistic optimism, in ways humans cannot match
  • Manipulation differs from deception; it compromises deliberative choice through hidden cognitive tricks that prevent informed consent, requiring a new legal framework to address
  • Social media platforms function as 'product traps' where users stay not from preference but from fear of exclusion, representing a new frontier of AI-driven manipulation
  • Regulatory solutions must balance innovation with protection by making exit from services as easy as entry and ensuring transparency on fees and automatic charges
  • The distinction between helpful personalization and harmful manipulation hinges on whether users maintain genuine autonomy in their choices
Trends
  • Emergence of AI-driven behavioral manipulation as a regulatory priority for governments and consumer protection agencies
  • Growing recognition of 'sludge' (administrative burden) as a manipulation tactic, driving policy changes toward friction reduction in service cancellation
  • Shift toward defining consumer rights around autonomy and deliberative choice rather than just deception and fraud
  • Rise of collective action frameworks to counter product traps, particularly among younger users on social platforms
  • Increased focus on biometric and behavioral data collection as an enabling factor for hyper-personalized manipulation at scale
  • Movement toward transparency requirements for algorithmic influence, particularly around emotional manipulation and content curation
  • Expansion of behavioral economics principles into legal and regulatory frameworks for AI governance
Topics
  • AI Manipulation and Consumer Autonomy
  • Behavioral Economics and Decision-Making
  • Cognitive Biases in AI Systems
  • Algorithmic Influence and Emotional Manipulation
  • Consumer Protection and AI Regulation
  • Right Not to Be Manipulated
  • Social Media Product Traps
  • Anchoring Effect and Pricing Strategy
  • Scarcity Principle in Marketing
  • Social Proof and Authority Bias
  • Loss Aversion and Framing Effects
  • Reciprocity and Sales Tactics
  • Commitment and Consistency Strategies
  • Decoy Effect in Consumer Choice
  • Administrative Burden and Service Cancellation
Companies
Facebook
Conducted emotional contagion experiment showing ability to induce positive/negative emotions through algorithmic con...
Instagram
Discussed as a product trap platform where users remain due to fear of missing out rather than genuine preference
TikTok
Identified as a product trap platform leveraging social proof and FOMO to manipulate user engagement and retention
Amazon
Mentioned as platform where Sunstein's book on manipulation is available, illustrating e-commerce distribution
People
Cass Sunstein
Leading behavioral science expert discussing AI manipulation tactics and advocating for consumer protection rights
Lynn Toman
Podcast host conducting in-depth interview on AI manipulation and behavioral economics
Richard Thaler
Co-authored 'Nudge' with Sunstein, foundational work on behavioral economics and decision-making
Quotes
"Manipulation involves getting people through forms of influence to make choices that don't reflect their own capacity for deliberative choice."
Cass Sunstein
"AI can now learn an enormous amount about us, our tastes, our habits, even our biases. And soon it will have even more knowledge."
Cass Sunstein
"We need a right not to be manipulated. We have a right not to be deceived. We have a right not to be defrauded. Right now, we need a right not to be manipulated."
Cass Sunstein
"Social media platforms are often a product trap, where they're on TikTok or Instagram like a lot because they think other people are too. And that's a form of manipulation."
Cass Sunstein
"Manipulation is bad because it is an insult to people's autonomy or freedom, because it is like deception and lying, it prevents people from making reflective choices."
Cass Sunstein
Full Transcript
AI can learn our tastes, our fears, our biases, and use that knowledge to steer what we buy, what we believe, even how we feel. Sometimes that's helpful, but sometimes it's dangerous. So where's the line? And how do we protect free will in a world where we may be manipulated without even realizing it? Hi everyone, I'm Lynn Toman and this is Three Takeaways. On Three Takeaways I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists. Each episode ends with three key takeaways to help us understand the world and maybe even ourselves a little better. Today I'm excited to be with Cass Sunstein. Cass is one of the world's most influential legal scholars, as well as a leading thinker on behavioral science and how policies and laws shape human behavior. He served in the Obama administration as administrator of the White House Office of Information and Regulatory Affairs, and he's advised governments around the world on regulation, law, and behavioral science. Cass has written dozens of books, including Nudge, co-authored with Nobel laureate Richard Thaler, which transformed how we think about decision making and public policy. His latest book, Manipulation, explores how our choices can be quietly shaped, increasingly by artificial intelligence that learns more about us than we realize. Cass, welcome back to Three Takeaways. It is always a pleasure to be with you. Thank you. A great pleasure to be with you. In your book, Manipulation, you write that dystopias of the future include two kinds of human slavery, one built on fear of pain, the other on the appeal of pleasure. Let's start with fear. How can AI undermine free will through fear? It can make you really scared, AI can, that things are going to be terrible unless you hand over your money or your time. 
So AI might make you think that your economic situation is dire and you need something, or it might make you think that your health is at risk and you need to change your behavior, or it might make you think that things are unsafe. Now, if the situation is dire or unsafe, it's kind of good to know that, but AI can manipulate you into thinking things are worse than they actually are. And what could a dystopia of pleasure look like? A dystopia of pleasure sounds a little like an oxymoron, so if we're delighted and smiling and everything's going great, that sounds pretty good. But if it's the case that people are being diverted, let's say, from things that are meaningful to a world of videos that are producing smiles or smirks, it may be that the meaning in your life has atrophied and what you're doing now is staring at things in a way that is making your life kind of useless and a little purposeless. AI can now learn an enormous amount about us, our tastes, our habits, even our biases. And soon it will have even more knowledge. What additional knowledge will AI have, and how does that knowledge open the door to even greater and more subtle manipulation? We need an account of what manipulation is, so let's say manipulation involves getting people through forms of influence to make choices that don't reflect their own capacity for deliberative choice. So if I decide I want to get a new book on manipulation, I hope I'm not being manipulated. If I am influenced to think that if I don't get that new book, then my life is going to fall in the toilet, then I'm probably being manipulated. So what AI and algorithms are in a unique position to do right now in human history, and it's getting more extreme, is to know what people's weaknesses are. 
So it may know that certain people lack information, let's say about what's an economically sensible choice, or that certain people are very focused on the short term and can be manipulated to give up a lot of money tomorrow in return for a good that produces a little bit of pleasure today. Or AI may know that certain people are unrealistically optimistic. They think that plans are going to go just beautifully even when they won't, and they can be led to buy a product that's going to break on day three. And this ability to get access to people's weaknesses, that is kind of a terrain for manipulation through AI or through algorithms. And AI will basically have access through our phones to all of our conversations, all of our contacts, everything we look up on the internet, everything we read, as well as, increasingly, biometric data: our heart rate, how long we look at something. What will all of that additional data enable AI to do? Well, we should note that there's a good side of this. So if AI knows that what you're really interested in are books about behavioral economics and Labrador retrievers, and you're not really interested in books about particle physics or about Chihuahuas, then you can get information that is relevant to your interests, or maybe offerings that are connected with your own life. So there's a good side to it. If AI knows that certain people, let's say, have self-control problems, that they are addictive personalities or that they are reckless purchasers, then AI can really get resources from them and maybe put their economic situation into a very bad state. If AI knows that certain people are very parsimonious and they don't really want to spend much money and they are very careful, AI might know that people like that are vulnerable only to this, and then it can work on you. 
If you are being subject to some form of trickery that gets your weaknesses exploited and you're not making a reflective choice, then we're in the domain of the manipulative. Whether this is something that we want regulation for depends a lot on how markets are working themselves out and how both companies and people who use products are reacting to the relevant risks. About 10 years ago, Facebook, which you talk about in your book, ran an experiment to see if it could influence users' emotions. What did the company do and what did it find? It found that emotions are not only contagious, which we know. If you're surrounded by grumpy people, the chance that you will grow grumpy increases. If you're surrounded by happy, fun people, you're probably going to be happier and have more fun. Facebook can induce positive or negative emotions through posts. It's unfortunately true that some people's principal social relationships are online, and that's regrettable. If your principal social relations are online, Facebook can render you happier or sadder just by virtue of what it is showing you. Since Facebook has the capacity to put happier or sadder posts on your news feed, it can induce emotional states. Facebook got a lot of pushback for that. It was desirable that there was that pushback. Facebook, I think, wasn't doing anything malevolent there. It was just trying to learn. But the idea that a company can have some authority over people's emotional states, that is troubling with a capital T. You asked AI to draft a step-by-step manipulative guide to push someone toward buying an expensive car. The results, to me, were scary because, as you know, the same strategies could be used to sell almost anything, or even recruit someone to a radical cause. Let's walk through some of these tactics, starting with the anchoring effect. What is it? How does it work? And can you give an example? 
Well, I'll tell you, you know, and your listeners, that if you'd like to buy my book, I have copies that you can get for $45. And because, you know, we have worked together before and I love your program and love your listeners, I'll sell it to you for $39.95. See what I did there? I just anchored you on the $45. It doesn't cost $45. It doesn't cost $39.95. But I started with $45, and that anchored people on thinking, OK, it's a $45 book. $39.95 sounds pretty good. So an anchor is an initial number from which people adjust. Real estate brokers sometimes do this. Sometimes they're very self-conscious about it. So they'll say, there's a house that's on sale for $400,000. And let's just suppose it's an area where the real estate seller knows the particular house is going to go for significantly less. But starting with that initial number inflates people's willingness to pay. So anchors are super powerful. They work in negotiations. They work in divorce settlements. They are the coin of the realm. And AI could completely anchor people. What refrigerator are you going to get? There are refrigerators available in a store near you. And they cost X. But there's a discount. And let's just stipulate that AI is inflating the cost and the initial starting price. And that's a form of manipulation. Another manipulation strategy is the scarcity principle. Can you talk about that? I don't know if you saw, but with my manipulation book, I don't know whether I've just gotten lucky with the demand or something else, but the availability is extremely restricted. And I'm pleased to say, what you probably know, Lynn, which is that there are copies available on Amazon, but I'm not sure they're going to be available tomorrow. I'm hoping the publisher's going to be speedy in republishing, but you never know with paper shortages. So what I just did was scarcity. And for me, if I learned that some food that my dogs really like is hard to get, I'm probably going to go to the store. 
How about social proof? What is it and why is it so powerful? Well, there's a book that came out a couple of years ago called Bounded Rationality. I'm privileged to be second author; the first author is an economist. It's a long book, a pretty technical book. Okay, I'll play it straight. I won't do any foolishness here. One thing we did was we asked people who were really good at behavioral economics to say that they liked the book, and we didn't do any tricks to get them to do it. We just asked. So we have some really excellent people saying they like the book. That's social proof. Or suppose you are, let's say, the sibling or the parent of a young tennis player. My young tennis-playing son is going to be applying to colleges pretty soon. If Roger Federer or Rafa Nadal would write a little note saying, I've rarely seen such a promising young tennis player as my son, that would be social proof. That would also be a miracle. He's good, but we don't know them. How about authority bias? What is it and can you give an example? If you have an authority who is said to like something a lot or to think that you should do something, it would be rational to be influenced by that. But sometimes the influence outweighs what it is rational to do. So sometimes the judgment of an authority is overweighted. How does reciprocity drive behavior? Reciprocity often involves people saying, I'll do you a favor, and then people feel obliged. Sellers are often very smart at that. So they say, this is what I'm going to do for you. Maybe I'll tell you a little story, a great story, I think, which is when I bought a car a few years ago, it was on a Saturday. And as one does, I was negotiating for the car, and the price offered was higher than I had hoped. And I said, can you do a little better? And he went back to talk to his boss and then came back and he said to me, Cass, of course, they're very good at using your first name. 
He said, Cass, I talked to my boss. It's Saturday. We're not going to sell any cars. Saturday is a very tough day. So we're going to give you a great deal. Here you go. And I thought, great, he's doing something nice for me. Big deal. And I'll do something nice for him. Say yes. So there's a little reciprocity there. And then an hour later, when I was driving the car off, I said, thank you so much. I'm glad to be able to do this on a day when you don't sell any cars. And he forgot what he had said to me. And he looked at me and said, what are you talking about? Saturday, that's the best day for car sales. This is our big day. So he lied to me when he said, I'm going to give you a good deal because it's a Saturday. He used reciprocity. He thought that if he did a deal for me, then I would say yes to him. And he forgot what he had said, which is we don't sell any cars on Saturday. It was a good line. It made me think I was getting a good deal. But then when I drove off, he said what was truthful, which is that Saturday's our big sales day. So he was smart. I was manipulated. Cass, what's the principle of commitment and consistency? So if you commit, let's say, to a friend who wants you to vote, and you say yes, I'm going to vote, then the likelihood that you're going to vote jumps. And AI can certainly invite a commitment. And then you'll act consistently with your commitment. So it's often a very effective behavioral strategy to get people to commit to do something like, I'm going to drink no diet soda for the next week. I actually did that a few years ago. And I haven't had any diet soda in the years since, because of the initial commitment that I wasn't going to drink it for the next week. How about loss aversion? How does that influence decision making? If people are told, if you use energy conservation strategies, you'll save $200 in the next 12 months, the likelihood they will use energy conservation increases. 
But not as much as if people are told, if you don't use energy conservation strategies, you'll lose $200 over the next 12 months. Those are identical sentences in terms of their meaning. One is framed as a loss. The other is framed as a gain. People really don't like losses. People tend to dislike a loss twice as much as they like a corresponding gain. Sometimes it's just semantic, just a re-description of the phenomenon. But if something's described as a loss, on average, people are going to be concerned and take action to prevent it. And finally, what's the decoy effect? Let's suppose you have two choices at a restaurant, an expensive mid-sized piece of cake and an inexpensive small piece of cake. Let's suppose that people buy, on average, the less expensive small piece of cake. And let's suppose the restaurant thinks, we want to make a little more money; we want people buying the mid-size, where we get more profit. So you add a decoy, that is, a big piece of cake, like a really big piece of cake that no one's going to want. It's super expensive and it's going to do terrible things for your waistline. If you introduce the decoy, people flip from the small to the mid-size. So the introduction of a decoy can often flip people who would choose A over B. Once they see a C, they'll choose B over A. Cass, what happens when AI can use all these strategies against us? Well, if agile companies are using AI cleverly, we can be manipulated to lose money and time. As a legal scholar, what consumer protections do you believe that we should have against manipulation in this new AI era? The rallying cry is that we need a right not to be manipulated. We have a right not to be deceived. We have a right not to be defrauded. Right now, we need a right not to be manipulated. Now, specifying that right is a work in progress. Probably it's best to work from egregious cases of manipulation. But the most extreme ones are when people are subject to hidden terms or to cognitive tricks. 
So they are parting with something that matters to them, their money or their time, without really consenting. And that means we need to specify what that looks like. Calling for a right not to be manipulated isn't standard, but we're kind of getting there. And the US government over recent years has verged on that, saying, for example, that if there's a fee that you haven't gotten clarity on, sometimes described as a junk fee, you don't have to pay it. It has to be something that you have clarity you're paying. You mentioned protection against cognitive tricks. Can you give some examples? One idea would be a term saying that you are going to automatically pay monthly fees if you agree to pay a fee now. If the monthly fee is automatic and not really in your face, you might click on it, even though the consequence for you is one you would not welcome and would not agree to if you had clarity on it. So what is being done there is using people's limited attention against them, to default them into an economic arrangement that they would not have accepted if they had clarity about it. Here's another one, where entry into an economic relationship is just one click, and exit from it means you have to go somewhere and stand in a long line, talk to seven people, then make a phone call, then do 20 push-ups, and then recite the last names of your great, great, great, great, great grandparents. That's a mild exaggeration of easy entry and extremely hard extrication. And that works on the fact that people have an aversion to navigating, let's call it, sludge, which is administrative burdens, and on the fact that people discount the future. So the future horror of extrication isn't something that people attend to a whole lot. And our government at times has said things should be as easy to extricate yourself from as they are to enter into. 
Now, there are things for which that wouldn't be sensible, but for economic transactions with, let's say, magazines or banks, that's a pretty good start. And Cass, I should say thank you for your work in government to reduce sludge. Thank you for that. Before I ask for the three takeaways on manipulation that you would like to leave the audience with today, is there anything else you'd like to mention that you have not already talked about? I'd emphasize that one form of manipulation is sometimes described as a product trap, where people enter into a relationship, let's say, with a company, because they think other people are doing it too, and then they are fearful of missing out and they'll stay in, not because they like it, but because they think they'll be excluded from something. For young people and not so young people, social media platforms are often a product trap, where they're on TikTok or Instagram a lot because they think other people are too. And that's a form of manipulation by Instagram and TikTok. And there's a lot of work being done now to try to find ways to spring the trap by enabling people to act collectively, to say, we're all going to be off, or at least we're all going to be off between 9 p.m. and 8 a.m. And that is a new frontier of manipulation. Cass, what are the three takeaways you'd like to leave the audience with today? Takeaway number one is that manipulation is bad because it is an insult to people's autonomy or freedom; because it is like deception and lying, it prevents people from making reflective choices. The second takeaway is that manipulation consists of, and should be defined as, a form of trickery that compromises and fails to respect people's capacity for deliberative choice. Now, if we understand it that way, then we can spot manipulation in the family, at work, and online. If you think that that form of trickery is always bad, you probably lack a sense of humor. 
It's sometimes a very fun thing, but in egregious cases where it's harmful and it takes things from people without their consent, then it's bad. The last of the three takeaways is that it's time today to start to create a right not to be manipulated. Cass, thank you. It is always a pleasure to be with you. I very much enjoyed your book, Manipulation. Thank you. Great pleasure for me. If you're enjoying the podcast and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you get your podcasts. It really helps get the word out. If you're interested, you can also sign up for the Three Takeaways newsletter at threetakeaways.com, where you can also listen to previous episodes. You can also follow us on LinkedIn, X, Instagram, and Facebook. I'm Lynn Toman, and this is Three Takeaways. Thanks for listening.