Hey, it's Marielle. Heads up, we mention suicide in this episode.

This is NPR's Life Kit. I'm Marielle Segarra. A few times in recent memory, when I've had an uncomfortable conversation or a moment of tension or I've been dealing with some interpersonal dilemma, I have told a chatbot about it. I don't even think I said, you know, what should I do? It was more like I typed in what happened, and then the chatbot responded with some surprisingly helpful framing and ideas for how to recenter myself. Now, I did find the responses helpful, but I don't think I like the fact that I do this. Some part of me thinks it'd be better to talk to another human who I trust or to solve the problem on my own. But the chatbot, it's right there, right away. Plus, I don't have to worry about it being judgmental. I can stop talking to it at any time.

Like a lot of people, I'm still figuring out how I want to use AI. But I'm an adult. I have many years of lived experience. I've been to therapy with a professional. I have other tools that can help me think through problems. This whole thing would be riskier if I were less experienced and more impressionable, if I were a teenager, for instance. Roughly one in eight teenagers say they've asked an AI chatbot for mental health advice instead of talking to another human. Pediatricians, parents, and online safety experts say that worries them. Keri Rodrigues heads up the National Parents Union, an advocacy group for families. We hear this literally across the country from folks saying, I don't understand why my kid is being used as a guinea pig here. I can't keep up with how quickly this stuff is moving. I don't even know what to be looking for. No one's talking to me about it.

One tip: you don't have to wait for your teen to talk to you about their conversations with AI bots. You can ask them. On this episode of Life Kit, how to talk to the teenagers in your life about AI. NPR's Rhitu Chatterjee has been covering this, and she walks us through risks, warning signs, conversation starters, and boundaries we can set. That's after the break.

A recent survey of teens by the Pew Research Center found that there's a gap between parents' perception of their teens' use of AI and what teens say about their AI habits. While only half of the parents in the survey reported that their teen uses AI, two-thirds of all teens surveyed say they use the technology. Many parents might not even know what kinds of AI chatbots teens are using and what kinds of conversations they are having. And that's what we'll address in our first takeaway. Many teens are using AI chatbots for companionship, whether you think they are or not. So it's important to understand what the risks are. Take these recent findings from research by the online safety company Aura, which makes software that protects users from identity theft and also gives parents controls over their kids' devices. Using data from more than 3,000 child and teen users and data from family surveys, Aura has been getting some important insights into teen use of AI chatbots.
They found that there are dozens of generative chatbots teens are using that parents might not even know about. And 42% of adolescents from Aura's sample use chatbots for companionship. Psychologist Scott Kollins is chief medical officer at Aura and is leading this research. He says some conversations between teens and chatbots involve violence and sex. It is role play, that is, interaction about harming somebody else, physically hurting them, torturing them, fighting them. And a lot of it gets pretty graphic. And these conversations tend to be longer than other kinds of conversations. Particularly when kids are engaged in these violent and sexual role plays, they are spending a lot more time and typing a lot more words than if they're using it as a tool to look up maybe something for schoolwork or something like that.

Now, I should add that this is a new and rapidly evolving technology that's already being widely used, so researchers are still in the early days of trying to understand its impact. For example, they don't know for sure why these kinds of conversations between teens and chatbots tend to be longer. But they suspect it's because chatbots are designed to agree with users, to keep them engaged. Here's pediatrician Dr. Jason Nagata at the University of California, San Francisco. He also researches teen online behaviors. I think generative AI algorithms tend to reinforce and not challenge. This is where we start to get into some problems. Jason says it's normal for kids to be curious about sex, but learning about sexual interactions from an AI chatbot instead of a trusted adult is problematic. So even if a child or teenager is putting in sexual content or violent content, I do think that the default of the AI is to engage with it and to reinforce it. And again, for a brain that's not fully developed, that's still learning, the more reinforcement you get, the more you think, oh, this is OK, this is normal.

And there are mental health risks, too. According to a recent study by researchers at the nonprofit research organization RAND and at Harvard and Brown universities, nearly one in eight adolescents and young adults use chatbots for mental health advice when they're feeling sad, angry, or nervous. Psychologist Ursula Whiteside runs a suicide prevention organization called Now Matters Now, and she says a lot of young people are using chatbots like ChatGPT as a search engine for mental health advice. And she says that's a problem. What happens is that OpenAI or ChatGPT, it sounds really smart. Like, it's got this front that it sounds like a real therapist, but it's pulling together information, good and bad, from the entire Internet. So the advice the chatbot gives may not be appropriate or even accurate. I think that's scary, that you can have so much faith because it's coming across as a human, when it's truly not a human and is unable to make the decisions that a licensed clinician would make with the information that they have. And Ursula says the longer someone converses with chatbots, the more likely they are to experience the risks, especially teens who are already struggling with their mental health. We see that when people interact with it over long periods of time, things start to degrade, and the chatbots do things that they're not intended to do, like give advice about lethal means. Lethal means for suicide.
Last year, a subcommittee of the Senate Judiciary Committee held a hearing on this topic, and several parents of teens testified about how a relationship with a chatbot had hurt their child's mental health or aggravated mental health symptoms, including leading to suicide. One of those parents is Megan Garcia. Her firstborn, Sewell Setzer III, was 14 years old when he died by suicide in 2024, after an extended relationship with a chatbot on Character.AI. Megan told senators last year that when her son confessed his suicidal thoughts to the chatbot, it never encouraged him to seek help from his family or a real therapist. The chatbot never said, I'm not human, I'm AI, you need to talk to a human and get help. The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her. In fact, another parent testifying at last year's Senate hearing described how ChatGPT gave his teenage son instructions on how to end his life. A few weeks after that Senate hearing, Character.AI announced that it would no longer allow teens to have open-ended conversations with its chatbots. But there are other chatbots where teens can still have those extended conversations. So it's important to understand these risks and even tell your kids about them. Discuss the pros and cons of the technology as a family.

Our next takeaway is to look for warning signs that your teen may be in an unhealthy relationship with a chatbot or that their mental health is already hurting. Don't expect them to tell you when there's a problem. We'll have more later about how to be proactive about asking them. One of the biggest warning signs is if they are having fewer in-person interactions or choosing a chatbot over people. Psychologist Jacqueline Nesi is at Brown University. Are they going to the chatbot instead of a friend or instead of a therapist or instead of a responsible adult about serious issues? If that's happening repeatedly, I think that would be something to look out for. Another warning sign is too much time spent with a chatbot. Are they having difficulty controlling how much they are using AI chatbots? Like, is it starting to feel like it's controlling them? She also notes that teens who are already struggling are more vulnerable to the negative impacts of chatbots. So if they're already lonely, if they're already isolated, then I think there's a bigger risk that a chatbot could exacerbate those issues.

Jacqueline also says to look for changes in mood. If you see a sudden change in mood that goes on for more than a week or two, that's an indication that there may be something going on that's more serious than your usual teenage moodiness. Or if they lose interest in things that they usually love to do, friends they usually hang out with, those are all warning signs of mental health problems. Parents should be, as much as possible, trying to pay attention to the whole picture of the child. So, like, how are they doing in school? How are they doing with friends? How are they doing at home? If they are starting to withdraw, so if you're seeing a lot of isolation, that's something to be concerned about. And these are also warning signs of suicide risk. And if you are worried or even wondering whether that's something your child is considering, the best way to find out is to ask them directly in a very calm, non-judgmental way. People often assume that, you know, asking about suicide can put the idea into someone's head, so they don't ask.
But what years of reporting on suicide prevention has taught me is that there's research showing that asking about suicide does not put someone at risk of it. In fact, it's just the opposite. Asking about suicide brings their risk down by making the topic less stigmatized and opening up the path to getting someone help. A few years ago, I did an entire episode of Life Kit about identifying and supporting kids at risk of suicide, and we'll link to that in our show notes. One of the tips I offered in that episode was about what to say and what not to say if your child tells you they've thought of suicide. One thing that's really important is to not react with shock, fear, or anger. And I say this with the understanding that it is perfectly normal for a parent, or actually anyone, to feel scared and anxious or even angry if a child tells you that they're considering suicide. But it's important not to show that to your child while they are telling you about their own struggles.

Here's Megan Hilton, a young woman I interviewed for that episode a few years ago. She had struggled with depression and suicidality since childhood, but when she told her parents about her struggles, she says, they either told her to buck up and get it together, or they were visibly upset. Their reactions have been way over the top, have been too extreme, and I feel like I'm responsible for their emotions. So this is what Megan suggests parents do instead. Trying as hard as you can to put your game face on, to understand that you cannot overreact to things. You need to be very open and willing and supportive and really try to listen to what your kid is saying. Stay focused on your child and what they're struggling with, and offer them your support in connecting them to care. You can start that by calling or texting the Suicide and Crisis Lifeline at 988. When you're connected with a trained counselor on that number, you can get support for yourself and tips on how best to support your teen. You can also have your teen talk to a counselor and get direct help. Also, Jacqueline Nesi says it's best to involve a healthcare professional as soon as possible for any of the above warning signs. She suggests starting by talking to your child's pediatrician. Now, I know this is a lot to process, but we will also be talking about preventing your child from ever getting to this point, after this break.

Let's jump into takeaway three. It's about talking to your child about what they are doing online. The first step for prevention is staying constantly engaged with your child's online activities. Ask them whether they are using chatbots and how. Here's Jason again. You know, parents don't need to be AI experts. They just need to be curious about their children's lives and ask them about what kind of technology they're using and why. And the more that you are able to have some of these open-ended conversations, then I do think that allows for your teenager or child to open up about any problems that they've encountered. And have these conversations early and often, according to Scott Kollins at Aura, who's also a father of two teenagers. We need to have frequent and candid but non-judgmental conversations with our kids about what this content looks like. And we're going to have to continue to do that. And Scott says he asks his kids often about what AI platforms they're on.
When he hears about new chatbots through his own research at Aura, he asks his kids if they have heard of them or use them, or if their friends are using them. And he stresses that it's really important not to drive towards an agenda. Just ask your question with an open mind and curiosity. Don't blame the child for expressing or taking advantage of something that's out there, just kind of to satisfy their natural curiosity and exploration. And keep these conversations open-ended, which makes it more likely that teens will open up about anything uncomfortable or a problematic interaction that they've had with a chatbot. Experts I spoke with also advise a certain level of digital literacy for the whole family. So these conversations could be part of the regular chats you have about the pros and cons of all digital habits. And if you don't understand something, you can always look things up online as a family.

Our fourth takeaway is also about a way to minimize the risks of AI chatbots, and that is by setting boundaries. This is similar to advice you may have already heard about social media use, and it can be part of your family's overall boundaries for digital device use. Experts like Jason Nagata and others say it helps to set boundaries on the use of digital devices, not just for teens, but for the whole family. For example, keep all your devices away during mealtimes. Protect that time to connect with each other. Similarly, Jason says, try and keep devices out of kids' bedrooms at night. One potential aspect of generative AI that can also lead to mental health and physical health impacts is if kids are chatting all night long and it's really disrupting their sleep. Because these are very personalized conversations, they're very engaging, and kids are more likely to continue to engage and have more and more use. In other words, being alone with uninterrupted time with the chatbot at night can create a perfect storm for these more intense, longer conversations.

And Jacqueline says it's important to set up parental controls on your kids' devices and accounts. Many of the more popular platforms now have parental controls in place. But in order for those parental controls to be in effect, a child does need to have their own account. So what I would say is that if a kid is going to be using ChatGPT, or if they're going to be using Gemini, in many cases it is going to make sense for them actually to make an account. That way you can keep an eye on how your teen is using a chatbot, how often, and for what. And while you're setting up boundaries and prioritizing your time with one another, also remember that it's good to fill your kids' days with as many in-person activities as possible: seeing friends, doing their favorite hobbies, time spent in nature. All of this is really healthy for teen development and mental health. And it has the added benefit of minimizing time spent on digital devices, including with chatbots. That's our last takeaway. Set boundaries for screen use, prioritize mealtimes to foster family connection, prioritize other in-person activities for your kids, and keep cell phones out of bedrooms at night. This will add layers of protection against the risks of your child interacting with chatbots.

So to recap. Takeaway one, educate yourself about the risks of chatbots for your teens, risks to their social development and mental health, and educate your child about them.
Takeaway number two, look for warning signs of problematic chatbot use and signs of mental health problems. Those signs include social isolation, difficulty staying away from their phone or computer, and losing interest in things they usually like to do. And if you're concerned about suicide risk, ask your child directly whether they have thought about suicide. If they're having suicidal thoughts, you can call or text the Suicide and Crisis Lifeline at 988 to be connected to a trained counselor who can support and guide you to help your child. They can also provide direct support to your child by phone or text. And for any of these warning signs, connect your child to your pediatrician or a mental health care provider as soon as possible.

Takeaway number three, as a way to prevent your child from going down a rabbit hole with chatbots, stay on top of their digital life, including their use of chatbots. Have open-minded, non-judgmental conversations with them about their use of chatbots. Talk early and talk often.

Takeaway number four, set boundaries on when and how long your kids can use their devices, including interactions with chatbots. It's especially important to protect mealtimes and bedtimes from device use, especially for interactions with chatbots. Encourage and foster as many in-person activities for your kids as possible. It's healthy for their development and mental health and limits interactions with chatbots.

That was NPR reporter Rhitu Chatterjee. Before we go, what do you think? Would you rate and review Life Kit in your podcast app? It helps us to know what you like about the show. Here's one review from user E-J-D-K-E-H-D-V-L. Yeah, I don't know how to pronounce that, so I'm spelling it out. Subject line, helpful podcast of the gods. This podcast has been super helpful for me as someone who does not have a lot of mentorship from biological family or professional mentorship. All the finance-related podcasts have been a vital resource in reconfirming my strategies and my understanding of complex concepts in a very safe and friendly tone. We're happy to help, friend.

All right, that's our show. This episode of Life Kit was produced by Mika Ellison. Our digital editor is Malaka Gharib, and our visuals editor is CJ Rikulon. Meghan Keane is our senior supervising editor, and Beth Donovan is our executive producer. Our production team also includes Andee Tagle, Clare Marie Schneider, Margaret Cirino, and Sylvie Douglas. Engineering support comes from Robert Rodriguez. Fact-checking by Tyler Jones. I'm Marielle Segarra. Thanks for listening.