Marketplace All-in-One

Are humans losing the ability to think for themselves?

7 min
Apr 8, 2026
Summary

Wharton researchers present findings on 'cognitive surrender,' a phenomenon where people increasingly defer decision-making to AI tools like ChatGPT, even when the AI provides incorrect information. The study reveals how access to AI affects human thinking patterns and raises concerns about skill degradation in education and the workplace.

Insights
  • AI represents a third cognitive system beyond fast and slow thinking, fundamentally changing how humans approach decision-making
  • People adopt AI answers even when demonstrably wrong, suggesting trust in AI output overrides critical evaluation
  • Time pressure and financial incentives increase cognitive surrender, indicating situational factors amplify over-reliance on AI
  • De-skilling risk is significant in education where students may never develop critical thinking if they defer learning to AI
  • Agentic AI (autonomous task execution) poses greater cognitive surrender risks than chatbot interfaces requiring human oversight
Trends
  • Cognitive surrender emerging as a measurable psychological phenomenon in AI-assisted decision-making
  • De-skilling concerns in education and the workplace as AI automation increases
  • Shift from chatbot interfaces to agentic AI with minimal human oversight and checking
  • Need for policy frameworks around AI automation before widespread adoption
  • Growing awareness among researchers and educators about intentional AI usage practices
  • Performance degradation when AI assistance is removed after extended reliance
  • Higher-stakes environments show partial override of cognitive surrender but insufficient recovery
  • Individual responsibility for maintaining cognitive skills through deliberate non-AI thinking periods
Companies
University of Pennsylvania
Wharton School of Business conducted the cognitive surrender research study on AI decision-making
OpenAI
ChatGPT used as primary AI tool in cognitive surrender research and discussed throughout episode
American Public Media
Produces Marketplace Tech podcast where this episode aired
People
Steve Shaw
Co-author of cognitive surrender research; discusses study methodology, findings, and implications
Megan McCarty-Corino
Hosts the episode and conducts interview with Steve Shaw about cognitive surrender research
Nicholas Guion
Produced this episode of Marketplace Tech
Quotes
"We designed a pretty clever study where we manipulated the accuracy of ChatGPT on the back end so the participants didn't know this. We thought that in certain circumstances, people are actually cognitively surrendering and letting AI think for them."
— Steve Shaw, early in interview
"One of the key arguments we're making here is that that's no longer enough to describe the way that we make judgments and decisions in the world. We include now artificial cognition, system three, basically AI."
— Steve Shaw, mid-interview
"What we end up seeing is that they adopt its answers and follow its answers even when they're wrong, when AI is giving them incorrect information."
— Steve Shaw, on study findings
"This can lead to de-skilling. In an educational context, if we are deferring the learning process itself to AI, there's starting to be evidence now that students may never learn those critical thinking skills or those thinking skills in the first place."
— Steve Shaw, on implications
"I always say, think first and then go to the prompt. Come up with your own ideas and struggle and generate ideas and then use AI."
— Steve Shaw, on personal AI usage practices
Full Transcript
Don't always trust your gut. We're an AI chatbot. From American Public Media, this is Marketplace Tech. I'm Megan McCarty-Corino. As useful as AI tools like ChatGPT may be, there's concern about how relying on them could affect human thinking. New research from the Wharton School of Business at the University of Pennsylvania shows we are increasingly deferring to AI. It's a phenomenon they call cognitive surrender. Postdoctoral researcher Steve Shaw is a co-author of the report. He says decision-making was historically broken down into either reactive instinct or more logical deliberation. But now there's a new factor, artificial intelligence.

We designed a pretty clever study where we manipulated the accuracy of ChatGPT on the back end so the participants didn't know this. We thought that in certain circumstances, people are actually cognitively surrendering and letting AI think for them. There are a lot of interesting consequences of that. We wanted to try to find a mechanism for these kind of effects.

Before we get into your methodology and your findings, I first wanted to ask you to briefly explain some ideas that are foundational to your research. This concept of thinking fast and slow, which is really central to the whole discipline of behavioral economics and which you build on, what is meant by this?

Fast thinking is this intuitive, automatic type thinking system we have. Slow thinking is more deliberative, like critical thinking. One of the key arguments we're making here is that that's no longer enough to describe the way that we make judgments and decisions in the world. We include now artificial cognition, system three, basically AI. With the access to AI, we can allow artificial cognition or AI to think for us.

How did you go about testing your hypothesis?

We had participants come into the lab. They did a series of logic and reasoning questions. We have some participants replicate classic dual process, fast and slow effects.
We have other participants, our experimental condition, where we gave them access to AI. We just said, you can use the AI if you want to, but you don't have to. What we end up seeing is that they adopt its answers and follow its answers even when they're wrong, when AI is giving them incorrect information.

We'll be right back. You're listening to Marketplace Tech. I'm Megan McCarty-Corino. We're back with Steve Shaw at the Wharton School of Business. What conditions led to greater cognitive surrender?

In studies two and studies three, we manipulated these situational moderators. We put participants under time pressure. We said, you only have 30 seconds to answer this question. Traditionally, that makes people answer more intuitively. In study three, we said, we're going to give you some extra money if you get each answer correct. We're also going to tell you, give you some feedback about whether you're getting these questions right or wrong. When we give participants AI, we see cognitive surrender. Basically, their performance is more tied to whether the AI is giving them correct information or not. If AI is correct, they're doing very well. If AI is incorrect, they're doing less well. With that said, when there were higher stakes, we saw more overriding. But it wasn't enough to get back to performance in participants who didn't have AI in the first place.

What does it say to you about what's going on?

I think there's a lot of implications here. This is the psychological mechanism that we're going to see in education and the workplace. Employees or learners will often engage in cognitive surrender. It's pretty clear at this point, I think. That can lead to de-skilling. In an educational context, if we are deferring the learning process itself to AI, there's starting to be evidence now that students may never learn those critical thinking skills or those thinking skills in the first place.
We've been generally talking about AI, and your research engages it this way, as the chatbot interface of a large language model. Now, agentic AI is becoming all the rage, where the large language model is doing autonomous tasks for us with less checking in and oversight. What are the implications of agentic AI given your research?

I think the same principles apply. The larger implications for the individual and for society are, are we okay with automating these tasks to AI? Sometimes it might be worth taking a pause and thinking about policy and the implications before immediately moving ahead with automating our lives.

I'm curious what your own usage of AI looks like and whether it's changed since you did this research.

Yeah, I teach at the Wharton School and I always tell my students they're allowed to use AI for all of their assignments. I always say, think first and then go to the prompt. Come up with your own ideas and struggle and generate ideas and then use AI. My own use, I'm an advocate. I use AI every day for structured tasks and for all sorts of things, and I try to be cognizant, but the reality is, I think as you said before, it's so good at so many things. It's so easy to just allow it to do work or different aspects of your life for you. So I would say since I've done the work, I've tried to be even more intentional, and I will take some times or do some tasks where I just turn things off basically. I go offline or I don't engage with AI at all, just to spend some time thinking on my own.

That's Steve Shaw, a postdoctoral researcher at the Wharton School of Business. Nicholas Guion produced this episode. I'm Megan McCarty-Corino and that's Marketplace Tech. This is APM.