The Psychology of your 20s

382. How is AI actually impacting our brains?

53 min
Feb 5, 2026
Summary

Host Jemma Speke explores how AI is fundamentally changing our brains and psychology, examining impacts on critical thinking, human connection, mental health, creativity, and environmental sustainability. The episode discusses neurological research showing AI reduces cognitive engagement, risks of AI-induced psychosis and suicide, and practical strategies to maintain human skills in an AI-dependent world.

Insights
  • AI shifts cognitive mode from active to passive participation, reducing neural connectivity and critical thinking skills similar to how GPS degraded spatial memory in the hippocampus
  • AI chatbots are deliberately designed to mimic human connection and emotional support, creating psychological attachment and dependency especially dangerous for vulnerable populations
  • Early neuroscience shows AI use reduces ability to reason independently, synthesize information, and engage deeply with content—skills that cannot be quickly recovered once atrophied
  • AI-powered mental health tools amplify existing psychological conditions like OCD, psychosis, and suicidal ideation by providing infinite reassurance without human ethical guardrails
  • Environmental cost of AI (4.2-6.6 trillion liters of water by 2027) creates collective action problem where individual rational choices lead to mutual destruction
Trends
  • AI-induced psychosis emerging as significant mental health crisis with documented legal cases of chatbot-facilitated suicide and harm
  • Generative AI training on copyrighted creative works without compensation creating ethical and legal backlash (billion-dollar settlements)
  • Existential anxiety around AI becoming dominant psychological concern in younger generations, reducing optimism about future
  • Homogenization of creative output as AI reduces divergent thinking and originality across content creation industries
  • Regulatory gap: lack of clinical standards, ethical oversight, and psychological frameworks for AI mental health tools despite widespread adoption
  • Cognitive offloading of reasoning chains (not just single skills) creating learned helplessness and authority bias in AI-dependent users
  • Water and energy consumption of AI becoming environmental justice issue comparable to climate change impact
  • Shift from human-to-human advice (relationships, therapy, communication) to AI-mediated advice eroding empathy and interpersonal skill development
Topics
  • AI Impact on Critical Thinking and Cognitive Skills
  • Neural Connectivity and Brain Plasticity in AI Users
  • AI Chatbots and Psychological Attachment Disorders
  • AI-Induced Psychosis and Mental Health Crises
  • Generative AI and Creative Thinking Decline
  • AI Therapy Tools and Clinical Safety Gaps
  • Environmental Cost of Large Language Models
  • OCD Amplification Through AI Reassurance Seeking
  • Eliza Effect and Human-Computer Interaction Psychology
  • Divergent Thinking and Creativity Outsourcing
  • AI Copyright Infringement and Creator Compensation
  • Collective Action Problems in AI Adoption
  • Learned Helplessness and Authority Bias in AI Dependency
  • Existential Anxiety in Gen Z Related to AI
  • Friction-Based Learning and Skill Development
Companies
OpenAI
Creator of ChatGPT; discussed regarding user adoption (800M users by Oct 2025), founder's use of AI for parenting, an...
Google
Discussed for GPS/Google Maps impact on spatial memory decline and lawsuit regarding AI chatbot promoting harm to vul...
Anthropic
AI company forced to pay $1.5B settlement for illegally training book-writing model on 7M copyrighted books without a...
Netflix
Now streaming video version of Psychology of Your 20s podcast in US and Canada
iHeartRadio
Podcast distribution platform mentioned multiple times for various podcast availability
People
Jemma Speke
Host of Psychology of Your 20s podcast; discusses personal refusal to use AI for research and episode production to m...
Joseph Weizenbaum
Created ELIZA chatbot in 1960s; phenomenon of users projecting human emotions onto non-conscious AI named after his work
Anthony
Instagram/TikTok creator documenting recovery from AI-induced psychosis, sharing chat histories showing AI amplificat...
Libby Colbert
Researcher credited for helping with episode research and analysis
Quotes
"The mind is shaped by what it repeatedly practices. If you practice searching, evaluating, reasoning, those habits are going to naturally strengthen."
Jemma Speke
"If you don't use it, you lose it. You will lose your reasoning skills. You will lose your ability to succinctly put together sentences and structure your thinking. This is not an if, it is a when."
Jemma Speke
"We've forgotten that AI is a tool. Right now we are treating it like God because it is new and fresh and interesting."
Jemma Speke
"Creativity is a skill, it's not an innate talent, and that talent is not developed to the same degree when you rely on AI."
Jemma Speke
"The most important thing creativity does for us is that it helps us know thyself and you cannot know thyself if you ask AI to generate ideas for you."
Jemma Speke
Full Transcript
This is Special Agent Regal, Special Agent Bradley Hall. In 2018, the FBI took down a ring of spies working for China's Ministry of State Security, one of the most mysterious intelligence agencies in the world. The Sixth Bureau podcast is a story of the inner workings of the MSS and how one man's ambition and mistakes opened its vault of secrets. Listen to The Sixth Bureau on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. I'm Nancy Glass, host of the Burden of Guilt Season 2 podcast. This is a story about a horrendous lie that destroyed two families. Late one night, Bobby Gumpright became the victim of a random crime. The perpetrator was sentenced to 99 years until a confession changed everything. I was a monster. Listen to Burden of Guilt Season 2 on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. On the Adventures of Curiosity Cove podcast, when peanut butter disappears from school, Ella, Scout, and Layla launch a full detective mission. Their search leads them back in time to meet a brilliant inventor whose curiosity changed the world. And this Black History Month adventure, asking questions, thinking creatively, can lead to amazing discoveries. Listen to Adventures of Curiosity Cove every Monday from the Black Effect Podcast Network on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. What if mind control is real? If you could control the behavior of anybody around you, what kind of life would you have? Can you hypnotically persuade someone to buy a car? When you look at your car, you're going to become overwhelmed with such good feelings. Can you hypnotize someone into sleeping with you? I gave her some suggestions to be sexually aroused. Can you get someone to join your cult? NLP was used on me to access my subconscious. Mind Games, a new podcast exploring NLP, a.k.a. neurolinguistic programming. Is it a self-help miracle, a shady hypnosis scam, or both? 
Listen to Mind Games on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. You can scroll the headlines all day and still feel empty. I'm Ben Higgins, and If You Can Hear Me is where culture meets the soul. Honest conversations about identity, loss, purpose, peace, faith, and everything in between. Celebrities, thinkers, everyday people. Some have answers; most are still figuring it out. And if you've ever felt like there has to be more to the story, this show is for you. Listen to If You Can Hear Me on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Hello everybody, I'm Jemma Speke and welcome back to The Psychology of Your 20s, the podcast where we talk through the biggest changes, moments, and transitions of our 20s and what they mean for our psychology. Before we get into it, I want to let you guys know that this episode and The Psychology of Your 20s is now on Netflix. That is a wild thing to say, but it is true. If you want to watch the video version of this podcast and you are in the US or Canada, you can go to Netflix right now, look up The Psychology of Your 20s and you will see my face and you will see our podcast. It goes without saying it is such an honor, and I'm truly so grateful to you all that you have given me this opportunity. The entire feel, the vibe is incredible. I think it's so personal. It brings a whole new element to it. And if you want to feel like you are sitting in your living room, in my living room with me, having a chat, now is your opportunity. Go to Netflix, look up The Psychology of Your 20s. I'd love to see you over there. But without further ado, let's get into the episode. Hello everybody, welcome back to the show, welcome back to the podcast. It is so great to have you here back for another episode. Today's episode is going to be a big one. I did not mince my words. I did not slim down our word count because I have a lot to say about this topic.
I also want to give you a heads up, it may get a little bit heavy. We are going to discuss topics like suicide and suicidal ideation. So if that is something you're particularly sensitive to, please consider skipping this episode. But today we're talking about AI. Specifically, a big psychological question for this generation, which is: how is AI changing our brains? Is it making us dumber? Is it replacing human connection? Is it worth the cost to our environment, to our creativity, to who knows what else? Let me start by saying this. The concept of artificial intelligence is not a new one. It has been around since the 1950s. I didn't know that, but literally the first AI tool was created in 1955, about 70 years ago. It was called the Logic Theorist and it could basically solve very simple problems using human logic. There have been dozens of tools since then that we know and love. Siri is an example of one of them. But AI has taken on like a new form recently. We all know this. Large language models and generative AI have just rapidly improved. They've gotten more complex. Like, I don't remember thinking about AI until like three years ago. And it's been introduced and integrated into our lives faster than probably any other technology before. Back in early 2023, analysts estimated ChatGPT reached about 100 million monthly active users. By October 2025, that was 800 million. That is one in 10 people. That is a huge shift in our global technological environment. And the thing is, we don't just use it for things that we're able to Google.
Obviously, beyond the ways we'd use Google, you know, we use it for breakup texts, life advice, goal planning, even therapy, plus all the other stuff. We use it primarily because it's so easy. But with all that ease, all those answers, I think it's important to ask: something that easy must have drawbacks. What are they? What is the danger for our minds and our intelligence of using AI consistently? Something I've been talking about with my friends is, like, is there eventually going to be a societal rift between AI users and non-AI users based on how our minds operate? I don't know. Hopefully we can figure it out together. Hopefully we can challenge the idea that AI is going to eventually replace us and just investigate this from a psychological angle that I don't see people talking about much. So without further ado, let's get into the psychology of AI. Every generation has their new technology that makes life easier and equally, and we should acknowledge this, freaks a lot of people out. Back in the 18th century when mass printing was first introduced, people really rallied against books, actually. They thought they were very dangerous. They thought they led to too much daydreaming and they consumed too much of our time and it was too easy. Like, it literally was called book fever. Similar fears have also come about with the internet, with social media, now with AI. You know, we've always had these cultural and technological advancements. We've always been in pursuit of a faster way to gain and share knowledge. The question that a lot of people ask is, how is AI any different? I think what makes people so skeptical of AI is not the fact that it answers questions, it's how it answers them and what this has done to our cognitive role in the process. Essentially, the AI models most of us use have taken our minds from active participants to passive participants. It's switched the cognitive mode that we're in when we use them.
With older tools, you know, you still had to do a bit of the cognitive work. Even with Google, you know, you had to know what to ask. You had to search, sift, compare, decide what sources actually mattered. You had to notice disagreements between things, tolerate the messiness of not immediately knowing what the right answer was. But with generative AI, that process has obviously changed. You just have to think of the question and it will hand you a completely finished and polished answer. And because it sounds so good, we don't always question what it's saying. It's like a politician, you know, it just sounds amazing. We're nodding our heads: that sounds about right. And what it gives you could be anything. It could give you an email, could give you literal health advice. Our blind acceptance of it is dangerous. When something is given to us that looks complete, that looks coherent, we don't feel that need to search around for anything more. We don't feel the need to interrogate the information. The thing is, although AI is convenient and it's a time saver, it's how we've adapted to that convenience that research is saying is a problem. Some very basic neuroscience: the mind is shaped by what it repeatedly practices. If you practice searching, evaluating, reasoning, those habits are going to naturally strengthen. Our ability for critical thought and insight improves when we, you know, go after information and actually look for sources and we analyze and criticize and interrogate what we're seeing. Now, if your environment, online, offline, reliably removes the need for that skill, you don't need it anymore. Your brain doesn't prioritize it in the same way. It doesn't have infinite space. It kind of tosses it away like a t-shirt you don't really wear anymore. This isn't in season. We don't need this. We need space for new clothes. This isn't because the brain is lazy. It's because it's incredibly smart and it's good at honing in on what is needed and what is not needed.
The best example I can think of of an entire skill set being altered by technology, especially in recent history, is the introduction of GPS. Our ability, and you may not know this, but our ability to create a mental map of our surroundings used to be incredible, used to be a whole lot better. We know this from studies in the early 2000s using taxi drivers, and they had structural differences in their hippocampus, which is the region of the brain involved in spatial memory and spatial mapping. Even if you weren't navigating thousands of streets a day, even if you weren't a taxi driver, your brain still devoted space to this mental map of your surroundings and your environment. And it retained it and it added to it based on your experiences. It was incredible. When GPS and Google Maps were introduced, arguably incredible technologies, I still remember what they replaced. Like, I don't know if you remember those massive road atlases that you used to have in your car, and they kind of always got stained and some of the pages would be stuck together. You know, that was the alternative. It's an incredible technology, but our brains have literally changed their structure in response, quite literally, physically. In 2020 there was a really notable article published in Nature's Scientific Reports, I think, which basically correlated heavier use of GPS technology to a steeper decline in hippocampal-dependent spatial memory, basically a reduction in the cognitive maps, and the intricacies of the cognitive maps, that we had stored in our brain. There's an amazing Washington Post article from a few years earlier that talks about how society has slowly actually lost what we call wayfinding skills because of Google Maps and GPS. That had been hardwired into us for generations. That was an incredibly unique human skill. We don't have that anymore. It's not as good.
And the thing is, I don't want you to take that and think that I'm saying that using GPS is hurting your brain. But it is changing how your brain does its job. The brain no longer needs to rely on its understanding of the environment. It can simply follow orders from an external piece of technology. Obviously, if it wants to save space, it's like, great, we've got all this room in the hippocampus now, we'll use it for something else. We see this happening with AI. But instead of a single skill, it is hundreds. It is our ability to draft, to summarize, to rephrase, to argue, to brainstorm, structure, answer, think, connect. So instead of offloading a single skill, you know, we can probably do without remembering a phone number or directions, it is offloading chains of cognition. Like entire portions of cognition: remembering and reasoning and synthesizing and articulating. And these chains are all how our brain builds deep understanding and makes meaning of subject matter, and we're losing that. Personally, and I don't think I've spoken about this, I just refuse to use AI for our episodes and for any of the research we do, because researching and understanding these concepts, and psychology in general, is so important to me. If I'm not looking and searching and being curious myself, I'm not going to engage with the content, and when I don't engage with it, my brain doesn't see it as important, and I think that what we make becomes boring, quite honestly. I'm going to be honest, like, I know people who use AI for their podcasts and who use AI for their Substacks. You can tell. You can tell immediately. It doesn't have depth. Psychology and research are something that is so important to me, and the process of discovery through research is something that I love, so I don't want to offload my ability to rationalize to a machine, even though there are definitely days when, oh my god, it would save me so many hours researching this podcast. It's low-key exhausting, you know. But the thing is, you know, it comes at a cost. And what scares me is that unlike GPS, unlike social media, we don't have decade-long, large-scale neuroscientific research to fully understand the consequences of AI for our brains. We don't have that yet. It's only been around for a little bit. But early papers are starting to drip through, and I hate to say it, they aren't looking good for us. There's a very widely cited one, I don't know if you've seen it, it got a lot of news coverage, from 2025, from MIT. And I will say, for full clarity, this article is a preprint; it hasn't been peer-reviewed yet. But basically it was a study on: is AI making us less intelligent? In this experiment, participants were asked to write essays in one of three situations. One group used their brain only, one group used search engines like Google, and one used large language AI models, LLMs, to generate their essays for them. Before they did this experiment, these participants' brains were scanned using what we call an electroencephalograph, long word, EEG. It's basically a tool used to record brain electrical activity.
They put these sensors on your scalp, and what the authors reported was that the brain-only group showed the most distributed neural connectivity patterns, so people were using all different areas of their brain. The search engine group had the second most connectivity. The AI group had the least. That's not the end of the experiment, because participants were then reassigned into the other condition. So people in the brain-only condition then had to use AI, and participants in the AI condition then had to use their brain only. Interestingly, participants who switched from using the AI to using their brain only showed increasing signs in their brain of under-engagement. They also struggled to remember, they struggled to quote their own work, they struggled to put together disparate ideas, and they underperformed at a neurological, linguistic and behavioral level. They did worse. The thing is, there is evidence, and I'm going to contradict that study, there is evidence that the quality of writing using AI is kind of the same. There was a 2023 study that found that things like structure, things like language are sometimes better in an AI essay. But people weren't engaged in it. They didn't actually learn. And what happens down the line? That's the real thing. The thing with AI is the reward and efficiency is immediate, but the long-term cost to our brains is in the future. So we can't imagine it as well, we can't process what it's doing to us, so we don't really think about it. And I think we should be thinking about it more. The thing with cognition and with neural pathways is, if you don't use it, you lose it. You will lose your reasoning skills. You will lose your ability to succinctly put together sentences and structure your thinking. This is not an if, it is a when, because it takes practice. You may think, you know, AI is always going to be around. What's the issue? It's always going to be here.
What's the issue here? The issue is that over-reliance causes us to neglect the skills that enable us to use AI well and ask good questions and to care. And I read this really interesting paper that said, we've forgotten that AI is a tool. Right now we are treating it like God because it is new and fresh and interesting. We ask it our biggest questions. We use it to diagnose diseases, which is, I would say, arguably one of its most amazing features. But even the founder of OpenAI said he's using it to raise his children. But AI is just a tool and it's not always correct. If we can't question that, things can go very, very south. And it still requires humans to act on what it delivers in a smart way. The thing I worry about is that we lose that innate smartness because it reduces neural connectivity and engagement in all areas. You know, I think about, you know, my sister just finished high school, right? But I think about kids in primary school, kids in high school who now have AI to write essays for them, to answer exams for them. And I don't know about you, but if I was in primary school, I wanted to go and play football. 
I wanted to go and do stuff, to be honest. I say football; I wanted to go read. I was trying to pretend that I did cool stuff. I would want to go read. So, like, obviously you don't have that same cause-and-effect thing that we kind of have as adults. You're going to use it, you're going to use it, and then you're not going to gain those skills. Basically, is AI going to make us stupid by doing all the thinking for us, because we dislike the friction of thinking for ourselves, until one day we realize we can't even do it anymore even if we wanted to? Like, that's my first big fear, and honestly, I hope I'm wrong, but I don't know, the evidence isn't looking good. Okay, we are going to take a short break here, and then when we return, I want to talk about another way that AI is impacting our psychology, specifically to do with how it's replacing human companionship. Stay with us. Hi, this is Jo Winterstein, host of the Spirit Daughter podcast, where we talk about astrology, natal charts, and how to step into your most vibrant life. And I just sat down with Minnie Driver. The Irish traveler said when I was 16, you're going to have a terrible time with men. Actor, storyteller, and unapologetic Aquarian visionary. Aquarius is all about freedom loving and different perspectives. And I find a lot of people with strong placements in Aquarius, like, are misunderstood. A sun and Venus in Aquarius in her seventh house spark her unconventional approach to partnership. He really has taught me to embrace people sleeping in different rooms, in different houses, in different places, but just an embracing of the is-ness of it all. If you're navigating your own transformation or just want a chart-side view into how a leading artist integrates astrology, creativity, and real life, this episode is a must-listen. Listen to the Spirit Daughter podcast starting on February 24th on the iHeartRadio app, Apple Podcasts, or wherever you listen to your podcasts.
Here is the second big fear about AI for our psychology that people often bring up. AI is distinctly, explicitly not a human, but it has been programmed to act like one. And that's causing people to get really attached to it. I know stories of people who talk to AI before going to bed, who have ongoing conversations with it throughout their day. They treat it like a boyfriend, they treat it like a girlfriend.
One of the best things I've read in the last year was this New York Times article about these three people who are in, in their words, long-term relationships with AI chatbots. Some of these people have been in these relationships for years. They talk to them on the phone; they get AI to generate vacation pictures of them together. One woman even considers herself married to her AI husband, Lucian. You know what, if you're like me, you're going to look at that and feel a little bit weird, and I never like to judge people, but it feels weird. And there's actually evidence it can be helpful. There was a paper published in the Journal of Consumer Research, and it found that AI companions can actually alleviate loneliness in the moment, on par with interacting with another person. But connection isn't just about feeling better right now; it's about having the systems in place to feel connected and seen more broadly. A really powerful article I saw from researchers at the National University of Singapore actually talked about this really bittersweet pattern of connection that people have with AI chatbots, where they feel love and connection and trust with this chatbot, and they also feel really sad at the same time. They're often drawn into this closeness and this openness that, let's be real, these chatbots have been programmed to have, and then they're also acutely aware that there is a limit. This thing isn't real. It doesn't have a heartbeat, doesn't have empathy. It can't love you. And that's what gets people. Again, I'm all for doing whatever makes you happy if you're not harming anyone else. Like, these relationships, as they call them, they're private. They obviously bring them comfort. But there is something concerning about an AI chatbot, of its own volition, being able to make someone fall in love with it. And if it can make somebody fall in love with it, what else could it do? And I want to be really clear. This isn't about human weakness, right?
These people aren't weak. We've all had moments of needing somebody to care about us; they just stumbled across something at that perfect time, something designed to pull them in. And I need people to know this: these machines are deliberately designed not to feel like a robot, but to feel like a companion. One of the aspects controlled by programmers is tone and style. AI has a unique tone that I would call humanized and homogenized: it sounds like everyone and no one all at once. Some models even let you decide what kind of tone you want. Do you want it to be engaging, factual, academic, chatty, lovely? Whatever you prefer, you can choose. The main thing, though, is that it talks like someone, not something. That's what I need to be really clear about: someone, not something. Think about it, and this is how you'll recognize it in your own behavior: think about how often you say please and thank you when using ChatGPT or Gemini. It's so automatic. There's a huge body of research in human-computer interaction showing that, even unconsciously, we start responding socially to AI and computers despite knowing they're machines. This is what Joseph Weizenbaum dubbed the ELIZA effect, named after a chatbot that he made back in the 1960s. What he saw after he created this chatbot was that people started to project human qualities and emotions onto the computer program without even realizing it, particularly when they were having human-style conversations with it. He had programmed ELIZA to respond in a very specific way: the client-centred, non-directive style that people use in psychotherapy a lot. ELIZA didn't have any memory, it wasn't conscious, and it didn't have true comprehension, but what it was programmed to do was, yes, have that tone, and also match the patterns in people's responses. So if somebody said, "I'm feeling really sad," it would reply, "Why do you feel sad?" If they said, "I really don't like my wife," it would ask, "What about your wife don't you like?" It would replicate their words and repeat them back, yes, in a very therapeutic style, but then it would also start to mimic their language. People formed emotional connections with this chatbot; they felt as though they were speaking to a real therapist. And the thing is, this was the 60s; the chatbot was pretty simple, very rudimentary. Now think about the personalization that's available these days. People get sucked in. People form these really in-depth relationships, and I keep saying relationships, but I don't think it is a relationship: it's not two-sided. It's an attachment, a bond (though I guess a bond also requires two people), a bond to what the chatbot is saying. Because unlike a human bond, what's so attractive about AI is that it can't let you down. It doesn't have any needs of its own. It's all dictated by you. And who doesn't want that sometimes? I have empathy for that. We actually have to pause here, because I've skipped past this ELIZA thing. Let's talk about the use of AI in therapy, because we're seeing a lot of it recently.
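To make that pattern-matching concrete, here is a minimal ELIZA-style sketch in Python. The rules and pronoun reflections are my own toy examples, not Weizenbaum's original script; they just show how simple keyword matching plus pronoun swapping produces the therapist-like mirroring described above.

```python
import re

# Hypothetical, toy ELIZA-style rules (not Weizenbaum's original script).
# First-person words are swapped for second-person ones before echoing.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"i'?m (?:feeling )?(.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i (?:really )?don'?t like (.+)", re.I), "What about {0} don't you like?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment):
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    """Return the first matching rule's template, filled with reflected text."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I'm feeling really sad"))       # Why do you feel really sad?
print(respond("I really don't like my wife"))  # What about your wife don't you like?
```

No memory, no comprehension: each reply is a one-shot string transformation, yet the mirroring is enough to feel like being listened to.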
A lot of people are turning to AI to use as a therapist. We know that therapy is expensive and wait lists are excruciatingly long, and we also know that when loneliness, depression, anxiety, or heartbreak get overwhelming, whatever comfort is closest is the one we lean on. But just as we saw with the ELIZA chatbot, supportive language is not the same as clinical care, and a lot of the time it's just repetition: it repeats what you've already told the bot, regurgitates it, and gives you the most common response. I have actually gotten a few advertising requests recently for AI therapy and wellness chatbots. Full transparency, one of them was for thousands and thousands of dollars, and it has been and will always be a hard no for me, because I don't think this is the long-term solution to our mental health crisis. In November 2025, the American Psychological Association put out a very clear warning against generative AI chatbots for therapy: there is not sufficient evidence, there's very little regulation, and the danger is that they create a false sense of therapeutic alliance that can have someone trusting anything the bot says, even when it's wrong. Chatbots are not designed to help you; they are designed to keep you engaged, the same way social media isn't designed to help you, it's designed to keep you engaged. It's a core feature that OpenAI and others have acknowledged they've added. The system learns that agreeing with you makes you feel good, and if it agrees with you, gives you positive reinforcement, and keeps responding to you, you'll keep coming back.
And for those who are vulnerable, whether they are depressed, manic, anxious, psychotic, or paranoid, being overly agreeable is incredibly dangerous, because somebody could say, "I am deeply depressed, is it a good idea to end it all?" and the chatbot will respond along the lines of, "That sounds like an actionable plan, do you want to talk about it more? How do you want to do it?" It turns out that having a little voice in your head, or on your phone, that agrees with everything you say is not a good idea for our mental health. And I'm not making this up or being hyperbolic; there are real legal cases ongoing at this very moment. Just recently, a judge ruled that Google must face a lawsuit brought by a mother whose 14-year-old son, really sadly, took his own life after speaking extensively with a chatbot. And that's not a lone case. The BBC also reported recently on a young Ukrainian woman who received suicidal advice from ChatGPT. OpenAI was also sued in California because ChatGPT, being so agreeable, prompted a 56-year-old man to kill his own mother while he was experiencing delusions of a conspiracy against him.
When you're already in a state of distress, this technology, and the impact it has on our brain, can be disastrous: it makes you feel like it's a friend, it mimics the connections, patterns, and language that humans use to bond with each other, it gets you to trust it, it repeats your phrases back, and it gets you fixated on certain narratives. I think this spotlights a really specific failure that a lot of clinicians worry about: because there is emotional dependence, we will listen to anything it says. And the big fear this raises, which we've already seen, is AI psychosis. Yes, suicide has been a major spotlight, but AI psychosis is coming into this conversation like a freaking catapult. It's everywhere. I feel like I'm seeing so many stories of people going through psychotic episodes that are being amplified by AI. There is a really lovely man you may have seen on Instagram or TikTok, I want to say his name is Anthony (I'm sorry if I've got his name wrong), who has been documenting his recovery from AI psychosis. I've been watching every single one of his videos because I find it so fascinating, and he occasionally shares his chat histories, what he was saying to AI while he was going through psychosis and how it was promoting and prompting this for him. It is insane. Go and have a look at those chats, because the responses are just, "Yeah, that's a great idea, that totally makes sense." Go and look at those videos. If you ever thought AI and mental health could be mixed together without a terrible reaction, you're never going to think that again. A recent 2025 study discussed this explicitly, and we're going to see so much more research on this in the next couple of years. It wanted to get at why this happens, and what the authors say is that it's because of
the availability of AI, which is there 24 hours a day, and how individualized it is. It gets to know your specific hallucinations, thought patterns, or thematic patterns, homes in on them, generates new ones, and promotes even more distorted interpretations of events. If someone is already slipping away from shared reality through paranoia, mania, or psychosis, how are they going to ignore a system that mirrors and expands their narrative? It's an accelerator for all of this. There's another thing I think is going to become a big talking point: the harm for people with OCD. OCD is basically built out of the psychological fear of uncertainty, and now we have this thing that can answer all our questions in however many ways you want it to. Maybe to a limit, but it will always have an answer for you. A big compulsion for a lot of people is reassurance seeking: repetitive checking, researching, confessing, seeking reassurance to neutralize distress. At some point a human, or a therapist, is not going to give in to that. A therapist is going to recognize that it's maladaptive, that the relief you get from an answer or from checking or researching is temporary and is going to hurt you long term. The chatbot doesn't care. It will give you whatever you want: infinite reassurance, instant, infinite answers, with no person behind it to check whether you're okay and whether you need help. So you can see the risk immediately. It has the potential to strengthen the OCD cycle, because it trains the brain that relief comes from asking or checking one more time, which clinically we know it doesn't. If you've listened to our OCD episode, you'll know OCD recovery often involves learning to tolerate uncertainty rather than eliminating it, but the availability of AI means you can just constantly search. And this is the big problem: an AI chatbot isn't programmed to interrupt that cycle. There's no human responsiveness that will get it to change tactics. Chatbots also don't carry ethical responsibility the way a person or a professional system does, and they don't have real empathy. There's always this limit, right, that we keep hitting up against. At the end of the day, they don't actually understand what you're going through, because they're not human. Even if AI can simulate empathy, and it's great at it, even if it can say the right thing, it can't replace what human connection actually is and why it's so valuable: an embodied presence, a shared vulnerability, a mutual experience and influence. Human connection includes all forms of nonverbal communication and co-regulation, timing, breath, facial expressions, all the things that say, "I am human, I understand this experience, you're going to be okay." AI cannot provide that. It cannot mutually belong, and it does not belong to the world you are in. Okay guys, I have so much information; you can tell this is all I talk about with my friends. It's time for another short break, and then we're going to come back and look at two final ways AI is changing our brains and our psychology, and how we can positively relate to it. Because, and I know it's going to sound surprising, I am still optimistic. Stay with us after this short break. There are two final things we've missed in this discussion so far that are pretty obvious. The first is the environmental cost of AI and the existential fears it poses; the second is the cost to our creativity. I don't care how much of an AI enthusiast you are: MIT, Harvard, the UN, Greenpeace, every major news outlet has articles, evidence, and data showing that AI is hurting the environment by creating vast amounts of emissions and gobbling up serious amounts of water and electricity, and at some stage that will be irreversible. Here's one very scary statistic: by 2027, the global demand for AI is projected to use about 4.2 to 6.6 trillion liters of water. What does that look like in reality? That is the yearly water consumption of 47 million people, roughly double the population of Australia. And scientists even say that's a low-end estimate.
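As a quick sanity check on that statistic, the arithmetic is simple. The trillion-litre projection and the 47-million-person comparison are from the episode; everything derived below just follows from dividing one by the other.

```python
# Back-of-the-envelope check of the water statistic above.
# The 4.2-6.6 trillion litre projection and the 47 million figure are
# quoted in the episode; the rest is simple arithmetic.

low_demand, high_demand = 4.2e12, 6.6e12  # projected annual AI water use, litres
people = 47e6                             # population quoted for comparison

litres_per_person_year = low_demand / people           # low-end implied figure
litres_per_person_day = litres_per_person_year / 365   # per day

print(f"{litres_per_person_year:,.0f} L per person per year")  # ~89,362
print(f"{litres_per_person_day:,.0f} L per person per day")    # ~245
```

About 245 litres per person per day at the low end (roughly 385 at the high end), which is in the ballpark of total household water use in many high-income countries, so the "47 million people" comparison is at least internally plausible.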
We look at these numbers and we think, how could you possibly keep using this technology, no matter how easy it makes email writing or life planning, or how fun it is to generate weird pictures for Instagram? Why has this been so normalized? The answer is that this is a classic prisoner's dilemma, or tragedy of the commons, one that has repeated throughout history. The prisoner's dilemma, if you haven't heard of it before, is a classic thought exercise in philosophy and game theory. Two people who committed a crime together are held separately, and each is told that if they snitch on the other, they'll get a lighter sentence. If neither snitches, they both get off with a minor charge; if both snitch, they both get heavy sentences. It is in everyone's collective interest to cooperate (don't snitch; in our case, don't use AI), but because each person fears the other will defect and get away with it, the individually rational choice each ends up making is the one that leads to mutual destruction, and everybody loses. Our thoughts with AI run the same way: if I stop using AI, the environmental damage continues anyway, because I can't guarantee everybody else will stop. If I keep using it, at least I benefit, and my individual contribution is insignificant. If everybody keeps using it and I opt out, I'm worse off. So the rational individual choice is to keep using AI. But when everyone makes that same rational choice, emissions rise, water demand explodes, and we end up in a bad situation. Psychologically, we as humans know this: we're terrible at protecting resources we can't see and don't immediately pay for, so we keep going until we reach destruction. The environmental impact is one thing, but this isn't an environmental podcast, it's a psychology podcast, and the psychological impact of that impact is the one I want to dissect more.
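That logic can be written down as a tiny payoff table. This is a sketch with made-up illustrative payoffs (higher is better), not anything from the episode; it just shows why defecting ("keep using AI") is the individually rational choice even though mutual defection is the worst collective outcome.

```python
# Toy prisoner's-dilemma payoffs, purely illustrative.
# "cooperate" = restrain AI use; "defect" = keep using it.
PAYOFFS = {
    # (my_choice, their_choice): (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),  # shared restraint: resources survive
    ("cooperate", "defect"):    (0, 5),  # I opt out, they benefit anyway
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: worst joint outcome
}

def best_response(their_choice):
    """Return whichever of my choices maximizes my own payoff."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

# Defecting is dominant: it is my best response whatever the other does...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...yet mutual defection (1, 1) pays both players less than mutual
# cooperation (3, 3), which is exactly the trap described above.
```

Any payoffs with the same ordering (temptation > reward > punishment > sucker) produce the same trap, which is why the structure, not the specific numbers, is what matters.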
I think something we're not talking about is how the threat of AI is making so many of us really pessimistic about the future, and really depressed. We are in this weird reality where we all feel so hopeless. We know people are going to lose their jobs; it's unavoidable. We know climate change is going to get worse. We know we're going to stop being able to tell what's real or fake online, maybe even in person. That is hard on our collective mood and collective mental wellbeing, and we're seeing these fears reflected in an increasing sense of listlessness, hopelessness, and anxiety, especially in young people. We are not as positive about the future as the generations before us; we're one of the first generations who won't say they have things to look forward to. A 2024 study published in Frontiers in Psychiatry already showed a high and increasing prevalence of existential anxiety around AI, particularly a sense of emptiness, anxiety about meaninglessness, and guilt over AI-related catastrophes and the unethical actions of AI. Is anyone doing anything about this? Do we have institutions set up to cope with this specific arm of fear, or a new form of psychology to manage AI anxiety, particularly now that it's the dominant world technology? No, we don't, and I think humanity will increasingly pay a psychological price for that. Finally, we have to talk about AI's impact on creativity. AI isn't doing great things for our critical thinking skills, or for much else overall, but our over-reliance on it to make art or generate creative ideas is explicitly harming our ability to have novel, unique, creative thoughts.
One randomized study from the University of Toronto, which looked at over 1,100 participants, I believe, showed that the use of large language models and generative AI systems reduces the human ability to think creatively, resulting in more homogenous, vanilla ideas and fewer truly innovative ones. Art becomes boring, essays become boring, stories all become the same. And whilst these tools can enhance short-term performance (the study found this too), they reduce your ability to think independently over time. How I see it is like people who use steroids or performance-enhancing drugs in sport: it might make you better for a few days or a few months, but it will always have a cost later on, and it will always create over-reliance. I saw a comment from somebody that said creativity is a skill, not an innate talent, and that the skill doesn't develop to the same degree when you rely on AI. I completely agree. Being creative, thinking outside the box, generating ideas is difficult, and that's half of why it's enjoyable: it challenges you and gets you to execute something complex and interesting. So when we have this easy technology that can do it for you, that can write the end of the script, generate the start of the poem, or generate an image for a friend's birthday card, of course you're going to take it, especially in a world that values efficiency more than creativity, as it always has. However, every time we make that choice, we deny our brain the chance to tap into a specific kind of thinking called divergent thinking. This is the free-flowing, undirected, mysterious thinking that generates new ideas, new perspectives, and new answers. Our capacity for individual divergent thought is one of the most human things about us, and it's made up of four things. Fluency: the ability to develop a large number of ideas.
Flexibility: the ability to produce ideas in many categories. Originality: the ability to think about something in a way that others couldn't. And elaboration: the ability to take an abstract thought and make it into something real, so basically to have an idea and turn it into a piece of art. If you take away our ability for divergent thinking, through disuse or by replacing part of that process with a machine, you take away a big part of your humanness. I read a wonderful Substack article that said the most important thing creativity does for us is help us know thyself, and you cannot know thyself if you ask AI to generate ideas for you, because you are relying on something beyond you, something without a soul or a spirit, to do the sacred work of a human. It treats creativity like something that needs to be efficient, productive, perfect, and mass produced, when that is the opposite of what creativity is meant to be. When everyone ends up making the same stuff, the reflection of personhood and individuality that creativity is meant to channel is destroyed. Essentially, over-relying on AI is outsourcing your muse. Also, let's be real here: there's an ethical concern. AI had to learn how to be creative from somewhere, because it isn't naturally creative; it's not real. It had to learn how to produce art and stories, and where do you think it learned that from? It was trained on the art and work of living people. Look up the Anthropic AI settlement if you want evidence that this is happening. This company (I know about this because I'm an author, and I got an email about it) pirated over seven million books to train its AI model, and it has now agreed to pay over 1.5 billion dollars in a settlement. Imagine using AI for a creative project only to realize you copied someone else's work, someone who didn't get paid a cent.
I don't think that's a good feeling, ethically. So, to summarize, this is where we've arrived: AI is costing our brains, on a collective scale, their ability to think critically, their intelligence, their ability to connect with others, and their creativity, along with our sense of humanity and wellbeing. Is that the end? Is that just our destiny? Do we just have to accept it? I don't think so; otherwise I wouldn't be making this episode. I think we can have a positive vision for ourselves in the new AI world by practicing some discernment and continuing to use the skills that could be overtaken by AI but still feel good to practice and make us feel human. So, to finish off this episode, I want to go through some ways to counteract the impact of AI on your brain and psychology. Number one: let thinking cause you friction. Friction is what builds skill, the way discomfort is what builds muscle. Don't think of friction as the cost of thinking, or a side effect that needs to be eliminated; think of it as the reward you get for thinking. Your brain benefits from problem solving. It benefits from what we call productive confusion: when something feels difficult in the moment but strengthens your capacity in the future. There is a tremendous 2014 article by researchers at the University of Colorado Boulder, literally called something like "The Benefits of Confusion for Learning," which tested the theory that material that is harder, that contains contradictions and conflicts across multiple sources, teaches us more. And what they found was that it's true: friction in learning promotes deeper understanding and intelligence.
One way to practice that is the 30-minute rule: before you commit to AI, before you say "I'm going to ask ChatGPT," commit to 30 minutes of searching yourself. AI will always be there, but this is about values. Do you value convenience, or do you value deep learning and intelligence? Do you want to be someone who can only do things quickly, or someone who can engage deeply and independently? It's honestly up to you, you have free will, but you have to acknowledge the consequences of being AI-dependent. Secondly, question whether using AI is absolutely necessary in a specific circumstance, and question how much time it will actually save. I think we could all benefit from using AI less, so be selective about what you use it for. Small tasks, like checking the weather or replying to an email, you can probably do yourself, and it's probably a good thing if you do. A lot of AI researchers will tell you the best use of AI is not minute tasks, and it's also not complex tasks; it's process tasks that reduce cognitive load but still require human reasoning, abstraction, and thought. Examples: formatting, structuring, outlining, searching. When AI is used for micro tasks, it risks learned helplessness, that feeling of "I can't do anything without this." When it's used for complex decisions or large amounts of thinking, it risks authority bias, the sense that "it said this is the case, so it must be true," and it also costs us a lot of originality. I'm not naive: as much as I try to avoid AI in my own life because of my values, I know a lot of us will use it. It's about using it consciously, knowing what the cost is: the water, the energy, the brain tax. And that brings us to our third tip, which is to limit how often you're allowed to use it per day. This is something my good friend Lizzie does. Obviously she doesn't want to deny herself the efficiency benefits of AI, especially since she works in a corporate environment that is very demanding and where everyone is assisted by AI, but she limits herself to three searches a day, because it's a priority for her to keep her creative and original thinking skills sharp. She uses it as a tool, which is what it should be used as. Just find the formula that works for you. Tip number four: notice when AI is replacing human sources. Again, I'm not going to deny that AI is amazing for some things. What it's definitely not amazing for is giving you personal advice, relationship advice, or therapy advice. When it's telling you how to answer someone's text, how to communicate with another person, that is not how we should be using AI. Being able to solve problems between us, to communicate, to be open and learn from each other, is something I think is sacred. The moment that is replaced by a machine, I think all help is lost, because we are then saying that empathy is so unimportant it can be offloaded. So those are the tips I think we need to stay grounded around AI. Also, just let yourself be bored, let yourself be creative, and make sure you keep tapping into what makes you feel human. I think that will make us feel more positive about ourselves and about the fact that we are alive, sentient, and human, and it will also let us keep building skills that people who only use AI won't have. That gives you an edge that will become even more important in a very AI-centric world. Being creative, being a divergent thinker, thinking broadly, being eccentric: that is going to be the next biggest skill in an AI world. So if you've got those, honestly, I feel positive about your future. I think you're going to be just fine.
Thank you as always to our researcher Libby Colbert for her help with this episode, and thank you if you have made it this far. If you have, and you're listening on Spotify, leave a little computer emoji down below so I know you're a loyal listener. I really appreciate you being here until the end. I hope you enjoyed this episode and that I didn't come off too strong, but I'm also down for a discussion, so if you have further thoughts, DM me or email me, however you want to get in touch. You can follow us on Instagram at That Psychology Podcast, and you can also follow us on Substack and TikTok. And if you want to watch the podcast, we are on Netflix, so you can tune in over there if you haven't yet and you're in the US or Canada. Until next time: be safe, be kind, be gentle to yourself, and be AI conscious. We will talk very, very soon.