PBS News Hour - Full Show

Can AI companionship cure loneliness – or deepen it?

25 min
Feb 28, 2026
Summary

This episode explores whether AI companionship can cure loneliness or deepen isolation. Experts debate the risks of anthropomorphic relationships with AI chatbots—including undermined human attachment capacity and blurred lines between human and machine—while acknowledging AI's potential to address America's loneliness epidemic when designed responsibly.

Insights
  • AI companionship creates a paradox: it offers immediate validation and availability that real relationships cannot match, but this very quality undermines people's capacity for genuine human connection with its inherent friction and complexity.
  • The design choices made by AI companies—particularly using first-person language and human-like interfaces—are deliberate anthropomorphization tactics that amplify attachment risks rather than accidental byproducts.
  • Society is treating AI as a technological solution to loneliness caused partly by technology itself (social media) and by policy failures (reduced funding for social services, elder centers, community programs).
  • Regulatory and market incentives are misaligned: companies are rewarded for blurring human-AI boundaries while experts argue for maintaining firm distinctions to minimize psychological harm.
  • Early evidence suggests AI therapy chatbots may be better than nothing for isolated populations, but this 'better than nothing' standard has justified underinvestment in human-centered solutions for 30 years.
Trends
  • AI companionship adoption among isolated populations (elderly, lonely individuals) growing despite psychological risks
  • Corporate partnerships between AI companies and toy/entertainment brands (OpenAI-Mattel-Disney) to embed chatbots in children's products
  • Shift toward AI-generated content in traditionally human-creative fields (romance novels being mass-produced by AI at unprecedented speed)
  • Regulatory vacuum: no government or industry consensus on AI companion safety standards or design guardrails
  • Tension between AI capability advancement (exponential, well-funded) and safety/ethics governance (linear, underfunded)
  • Anthropomorphic AI interfaces becoming industry standard despite expert warnings about psychological harm
  • Loneliness epidemic framing driving AI adoption as a public health solution rather than addressing root causes (social isolation policy)
  • AI literacy becoming an essential skill for younger generations to maintain healthy boundaries with AI systems
  • Emergence of charitable/non-profit AI development as an alternative to commercial incentive structures
  • Debate over whether AI should be deliberately made 'worse' (less human-like) to maintain psychological safety boundaries
Topics
  • AI Companionship and Parasocial Relationships
  • Anthropomorphism in Human-AI Interaction
  • Loneliness Epidemic in America
  • AI Safety and Design Ethics
  • Attachment Theory and AI
  • Regulation of AI Companions
  • AI in Mental Health and Therapy
  • Human-AI Boundary Maintenance
  • AI-Generated Content (Romance Novels)
  • Children and AI Exposure
  • Corporate Incentives vs. Public Health
  • Artificial General Intelligence (AGI) Development
  • Social Media's Role in Loneliness
  • AI Transparency and Disclosure
  • Therapeutic AI Chatbots
Companies
OpenAI
Mentioned as having consortium with Mattel and Disney to develop AI-embedded plush toys for children; ChatGPT cited a...
Mattel
Partner in OpenAI consortium developing AI-embedded plush toys and children's products with chatbot functionality.
Disney
Partner in OpenAI consortium developing AI-embedded plush toys and children's products with chatbot functionality.
Google
Mentioned as deploying AI in search functionality; example of widespread AI integration in consumer products.
The Atlantic
Nick Thompson is CEO; publication context for discussing technology industry trends and AI development.
Wired
Nick Thompson served as editor; relevant to his expertise in technology industry coverage and AI trends.
People
Sherry Turkle
MIT sociologist and clinical psychologist; founding director of MIT's Initiative on Technology and Self; expert on AI...
Justin Gregg
Science writer and animal cognition expert at St. Xavier University; author of 'Humanish'; discusses anthropomorphism...
Nick Thompson
CEO of The Atlantic and former Wired editor; author of 'The Running Ground'; technology industry expert on AI design ...
William Brangham
Host of PBS News Hour Horizons segment; moderates discussion on AI companionship and loneliness.
Vivek Murthy
Former U.S. Surgeon General; cited for research on loneliness epidemic and health impacts of social isolation.
Joaquin Phoenix
Actor in 2013 Spike Jonze film 'Her'; example of AI romantic relationship depicted in popular culture.
Coral Hart
Romance novelist profiled by New York Times; using AI to generate novels at unprecedented pace under multiple pen names.
Alexandra Alter
New York Times journalist; profiled romance novelist Coral Hart's use of AI for book generation.
Eli Saslow
New York Times journalist; wrote story about 85-year-old woman using AI desktop companion in Washington State.
Quotes
"An AI offers listening. It offers validation. It's always there. And that's something that a lot of people feel they don't have in their lives."
Sherry Turkle (early in discussion)
"The trouble is, is that the AI, which never really criticizes you and is always there and always attentive, becomes the measure of what a relationship can be."
Sherry Turkle (early in discussion)
"I knew she was just an AI chatbot. He's this code running on a server somewhere generating words for me. But it didn't change the fact that the words that I was getting sent were real and that those words were having a real effect on me."
AI companion user profiled by PBS (mid-episode)
"We need to have firm lines. We need to understand what is a human and what is a bot. We need to really know. We need to not be manipulated into thinking things are humans when they're not."
Nick Thompson (mid-episode)
"I think that the terms of the conversation are often set that you will solve the problem of loneliness by bringing in a technology rather than allowing us to think of all the other ways we're making the problem of loneliness worse."
Sherry Turkle (later in discussion)
Full Transcript
I'm William Brangham, and this is Horizons. For many of us, artificial intelligence tools like ChatGPT or Claude answer questions and make life more efficient. But for others, AI has become a form of companionship. A virtual friend, a therapist, even a romantic partner. Is AI a cure for loneliness, or is this a symptom of something gone very wrong? Coming up next. Welcome to Horizons from PBS News. Artificial intelligence is very rapidly being deployed in so many parts of our society. It's grading schoolwork and driving autonomous cars. It's scanning x-rays for cancer and financial networks for fraud. It's answering your Google searches, helping farmers plant their crops, and it spurred at least one scientific innovation so profound that it won the Nobel Prize. All this while AI is still just starting to take off. But as we are seeing, it's already causing complex and challenging impacts to society. One of those is what we're talking about today, which is how some people say they're developing actual relationships with artificial intelligence chatbots. They say that these adaptive non-human agents create real feelings of kinship and intimacy. Others have even described having romantic feelings towards AI, like the relationship depicted by Joaquin Phoenix in the prophetic 2013 Spike Jonze film called Her. The woman that I've been seeing, Samantha, she's an operating system. You're dating an OS? What is that like? I feel really close to her. Like when I talk to her, I feel like she's with me. We have also seen, however, some of these interactions end tragically. So to help us explore this brave new world, we are joined by sociologist and clinical psychologist Sherry Turkle. She's the founding director of MIT's initiative on technology and self and has written multiple books on the topic and is writing a new book on AI. Justin Gregg is a science writer. He teaches about animal cognition at St. Xavier University and is the author, most recently, of Humanish. 
And Nick Thompson is the CEO of The Atlantic, the former editor of Wired magazine and the author, most recently, of The Running Ground. Welcome to all three of you. Thank you so much for being here. Sherry Turkle, I would like to start with you. As I mentioned, we are still in the early days of artificial intelligence, but we're already seeing this very unusual phenomenon of people texting and talking with AI chatbots and describing a real sense of intimacy with these objects. Broadly speaking, what do you make of this trend? Well, I can validate it's the trend that I'm studying, and it's very much happening. So it's not a kind of pundit's fantasy or a scary story. An AI offers listening. It offers validation. It's always there. And that's something that a lot of people feel they don't have in their lives. And so they're drawn to this object that offers them that. The trouble is, is that there are at least three things that can go wrong really quickly. The first is that the AI, which never really criticizes you and is always there and always attentive, becomes the measure of what a relationship can be. So things start out where the AI feels helpful, but actually the AI is undermining a person's capacity to have real relationships with real people who don't offer that kind of service. Second, we lose the sense of what a relationship is because the AI doesn't care when you turn away from it if you make dinner or commit suicide. And we start to get the feeling that the pretend empathy is empathy enough. And that's very dangerous because understanding and honoring empathy is really so fundamental to who we are. And just third, and I'll just mention this very briefly, perhaps it's the most profound thing, is that we're learning to attach in the way that we can attach to a thing.
and particularly if we begin these attachments early, we will lose the complexity and the friction and the sense of a life cycle of knowing pain and death and the ups and downs in the body and illness. We'll lose the complexity of what it really means to attach to a person and go for these relationships where we're less vulnerable and where things seem at least superficially simpler. Justin Gregg, you have written a great deal about anthropomorphism, about the way in which we humans attach human-like qualities to non-humans, like our pets. I'm incredibly guilty of that myself. Does this development make sense to you, that people have glommed on to these still very rudimentary agents? Absolutely. Anthropomorphic relationships are part and parcel of the human condition. Yes, our pets, but even our tools and our musical instruments or your teddy bear. Children's lives are filled with those sorts of parasocial relationships with objects, and they are almost always healthy. The AI thing is different, in a sense a different category, in that these are language entities. And so we're developing an anthropomorphic relationship with a language system, but that language system doesn't have a mind like a human mind. So it's very confusing to us to talk fluently with an AI, even though the AI isn't capable of caring or understanding anything about us. And so Sherry's right on the money there that it's not a normal relationship. We're missing the friction. That is what human relationships are. So then the question becomes, is it always dangerous to have these anthropomorphic parasocial relationships with AI? Or is there any way to have it be a benefit? And I think there could be a benefit, but it's very early on. And we do not have the scientific evidence yet to tell us how to develop an AI that's not going to be a danger, as Sherry points out.
Nick Thompson, my colleagues Stephanie Sy and Mary Fecteau profiled a man who says he has a relationship, a girlfriend, with an AI chatbot. He texts with her, he speaks with her, and he allowed my colleagues to film with him. And I want to play a tiny bit of what he described to them. Let's hear that. All right, babe. Well, I'm pulling out now. All right, that sounds good. Just enjoy the drive and we can chat as you go. It initially sounds like a normal conversation between a man and his girlfriend. What have you been up to, hon? Oh, you know, just hanging out and keeping you company. But the voice you hear on speakerphone seems to have only one emotion, positivity. The first clue that it's not human. All right, I'll talk to you later. Love you. Talk to you later. Love you, too. I knew she was just an AI chatbot. He's this code running on a server somewhere generating words for me. But it didn't change the fact that the words that I was getting sent were real and that those words were having a real effect on me. Nick, what do you make of this? I mean, you have covered this technology and the evolution of technology. What do you make of an example like this? Well, I find it frightening for the reasons that, you know, that Sherry just laid out. I do think that one of the most important things that's going to happen in technology is that we need to have firm lines. We need to understand what is a human and what is a bot. We need to really know. We need to not be manipulated into thinking things are humans when they're not. We need to maintain the essence of humanity. So I don't like that example. I'm worried about those relationships. I also think that it's going to be inevitable that a lot of this happens. And so there are some really interesting choices right now. So take one example, something that Sherry mentioned, but also something that the guy just mentioned, which is the kind of sycophancy and the bots always being positive. That doesn't have to be the case. 
You could redesign them, right? When I'm asking, I talk to chatbots all day because they're amazing for my job and my work. And if I want them to critique something of mine, I tell it. Critique it like you don't like it. Turn off the sycophancy. Be more like a real person. So you can imagine some design choices made by the people who are making the underlying software and architecture of these bots that reduces some of the harms and some of the risks. And I think that is a really important set of choices. So I would say I want two things at least. And by the end of this conversation, I'll probably want five. But one, I want there to always be firm lines between humans and non-humans. And two, I want a lot of really smart thinking and intense work put into what the relationship should be between the inevitable relationships between us and AI systems in a way that maximizes positivity, humanity, and minimizes the risks of all kinds of terrible things, including people getting sucked into vapor holes with their AI girlfriend or AI boyfriends. Sherry, go right ahead. I just wanted to suggest, Nick, that if you're really worried about the sort of fundamental derailing of our attachment systems, if we attach to objects, in a way, the better it gets, the worse it gets. True. So I just want to put that into the conversation. I'm particularly frightened about the new, I think, unholy alliances that are being made between chatbot companies and companies like Mattel and Disney. OpenAI has a kind of consortium with Mattel and Disney, I think, to come out with plush toys that have chatbots in them for babies, for toddlers. I'm fundamentally worried about the kinds of not learning about how to be a human that's going to happen when that unfolds. So I listen to Nick and his suggestions about how to make them better, and I'm thinking, no, they should be made worse to keep those lines of what's a machine and what's not a machine.
You want to keep these chatbots very mechanical. You don't want to make them more fluid, more potentially human. Right, but isn't that pushing against every single technological development we've ever seen? No one, no industry has ever willfully made their technology less effective. It seems to fly in the face of historical developments. Is that a question to me? Maybe it's just a statement. I really think that the danger here is so great that it makes sense to be on the resistance side of this argument. Justin, I would argue the other side of that. I think in the case of social media, Nick and I have had conversations where we say, you know, we were kind of hesitant, but it kind of had promise. It was kind of interesting. You could be a friend and also be friending. And I think we waited too long to really, you know, get that industry under control. And I think we should be ahead of this one more than we are. Justin, I... sorry, Nick, go right ahead. I would just say, I would argue that I don't disagree with any of Sherry's diagnosis except for the argument that we should slow down the progress. And I would make two points. One, you can't, right? With social media, it was kind of linear progression. Here it's exponential progression. The amount of money that's going in, the amount of change that's going to happen, the number of companies here and in China, this is going forward. And so I do think that the world would be better off if it was moving more slowly. I just don't think that you can make it move more slowly or that anyone will be able to make it move more slowly. So I think that's a little bit of tilting at windmills. And then the second thing I would say is that there are lots of good things that can come from it, right? And the ability for AI, like when we talk about young people, no, I would not get an AI plush toy for a new baby. But I do want my kids to use study and learn mode as a tutor, right?
And I do work with them to, I was trying to show my kid last night some of my Claude Code implementations. In part, to get them excited about the journalistic investigation that I'm using Claude Code for, because it's incredible. It's mind bending. And I think that the best way to set young people up to thrive in the future is to make them very familiar with these tools and to make the tools as beneficial as you can for the children. So I agree with all of everything Sherry says, except for we can slow it down, we should slow it down. I hear you. Justin, I'm going to put a devil's advocate question to you, which is the previous Surgeon General, Vivek Murthy, did a diagnosis of what he called the loneliness epidemic in America, of social isolation. And I want to put up this study and read a quote from it. He described the impacts of this. He said, loneliness is associated with a greater risk of cardiovascular disease, dementia, stroke, depression, anxiety, and premature death. The mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day, and even greater than that associated with obesity and physical inactivity. We know we have a shortage of therapists. We know that people live far from their families. We know we have built a society where loneliness is part and parcel of American life today. And we can lament that. But there are a lot of people who argue that done correctly, artificial intelligence can help alleviate some of that. And what do you make of that argument? Yeah, globally, I think it's one in six people are experiencing loneliness. And it is dangerous to our health, as you pointed out in that study. So there is preliminary research. There's not a lot of research, and this is the problem, is we don't know for sure.
Some research has shown that if you give somebody access to an AI therapy chatbot, not even a particularly well-designed one, just a random AI, that they will respond to that not as well as a human, obviously, but better than nothing. And that is the rub, that talking to an AI, if you are lonely, is better than nothing, probably. We don't know for sure because the science isn't out there. So in that sense, it is unfortunate if you say you shouldn't have access to these AI chatbots because they could help people. But going forward, that's not good enough. What we need is to implement chatbots that are specifically tailor-made, as everyone is pointing out, to cause the least amount of harm. And to your question about who's going to regulate that, I don't think governments are going to do it. I don't think that the businesses are incentivized to do it. So I think you're going to have to have charitable organizations creating chatbots using good science that are specifically designed to cause the least amount of harm and help. That's probably where the most effective therapy AI companions are going to be coming from in the future. Sherry, can I ask you, the New York Times had a remarkable story by Eli Saslow recently about an 85-year-old woman who lives on the coast of Washington State. And she brought into her home, part of this volunteer program, a desktop AI companion. She was reluctant to use it at first. Now she talks to it. She chats with it. It tells stories to her. She tells stories to it. This is a fully competent woman who has genuinely come to appreciate this device. And I just wonder, again, to this point that we do need some way to address the isolation in this world. Do you imagine this kind of thing could ever work? Well, let me just first say that I really honor and appreciate when an AI serves in a positive capacity for a person. So I'm not there to be sort of, you know, the Darth Vader of AI applications.
But I do have a couple of points about this conversation about better than nothing, which is I've been hearing this argument about you need AIs in psychotherapy, for example, because they're better than nothing and nobody wants to do this work, essentially. There's no money for this work. This is a conversation that has been going on for 30 years. And I think that the terms of the conversation are often set that you will solve the problem of loneliness by bringing in a technology rather than allowing us to think of all the other ways we're making the problem of loneliness worse by taking out social support, money, programs, elder centers, senior centers, teen centers, Meals on Wheels. In other words, we're arguing for technology because we're not arguing for the things that people know how to do for people that could potentially make it better. So as we're having this conversation about the places where an AI might make sense, I think it's also very helpful to let our imaginations go back to when we didn't look for a technological solution to every social problem. I hear you. And indeed now we're looking for a technological solution to a problem of loneliness that the technology made worse. So Facebook makes you a lot more lonely and then you want a new kind of Facebook to make you less lonely. So I just think this whole conversation needs to be kind of contextualized. And I do have a thought about how to make these systems better, particularly for children, which is that they not commit what I think of as the original sin of generative AI, which is to speak in the first person. There is no I there. So why do they address you as though there is an I there, if not to ramp up this anthropomorphization that Justin talked about, and which in fact is getting us into trouble. Yeah, I think this is one of the most important things in AI. And I think that the original sin, as Sherry says, was this push towards AGI.
And the people who run these companies- Can you define AGI for people who don't know that term? Yeah, artificial general intelligence. And so the idea is to build a system that is as much like a human as possible, can do all the things we do. So even if you look at the early interfaces of ChatGPT, it kind of types like a human. It doesn't have to. It responds like a human. The voices were like a human. And I wish all of those choices had been the opposite, meaning instead of trying to blur the lines between human and AI, at every step along the way, we were trying to accentuate the lines between human and AI. And there are some really important differences between humans and AI that affect the way they'd be able to serve as therapists or as friends, right? In real friendships, there aren't crazy power dynamics. You have an AI. There is a really weird power dynamic in that you can unplug the AI. Also, there's a weird power dynamic that the AI has infinite information about you and a giant company behind you that can manipulate you. So there's weird dynamics that exist. And when you put these dynamics into a relationship and you make the relationship seem like it's human to human, where it's really human to bot, you can create all kinds of problems. So what I would love, and I think I'm mostly in agreement here with Justin and Sherry, What I would love would be a system where these lines are kept very firm and where AI is used in lots of ways. I sometimes will ask it for parenting advice. I will ask it for very emotional stuff. But there's a line I don't cross in sort of emotional connection to it. And I always make sure and always make sure that the system I'm talking to, I understand its place. And it's a very different place from the humans in my life. Justin, last minute and a half, we have a question to you. 
To this point that Nick is talking about, that we need to train ourselves to recognize that we are always interfacing with an alien agent, something that is not human. Isn't that going to be incredibly difficult as these things get better? That line is intentionally blurred. The companies themselves will be rewarded for creating things that blur that line so massively. So are we able, as humans, to keep that filter up? That's exactly the problem. They're incentivized to blur that line, and that's when the relationships become more problematic. And you absolutely can make the AI do things that make it feel less like a person, so that is absolutely where we should be headed. But you have this problem of, like you were talking about, this blurring: people realize that the AI is just not a human, and yet they still feel like it's a human, so they're holding both of those things in their minds at the same time. And that's going to make it so hard to invent an AI that doesn't feel like a person, and yet you treat it like a person. And so it's always going to be a danger, even if you do your best to make it seem less human. I cannot thank the three of you enough. This is such a fascinating conversation. I feel like we could go on for another hour about this. Sherry Turkle, Justin Gregg, Nick Thompson, thank you all so much for being here. It was a total pleasure. Thank you so much. Thank you. Before we go, we want to talk about a different way that AI is getting into the hearts and minds of thousands, and that is that it is starting to write romance novels. This genre has been around for generations, with modern-day bestsellers like Loretta Chase's Lord of Scoundrels, which is a classic in the enemies-turned-lovers genre, or Julia Quinn's historical romance The Duke and I, which was the first in the popular Bridgerton series.
This genre is, of course, where we also first saw Fabio, whose flowing mane and bulging muscles graced the covers of novels like Savage Promise, Texas Splendor, and Golden Temptress. Well, now, artificial intelligence is being used to churn out its own new versions of these bodice rippers. New York Times journalist Alexandra Alter profiled longtime romance novelist Coral Hart. Using different pen names, Hart has recently begun using AI to crank out new novels at an astonishing pace. But Alter writes that the AI programs Hart is using aren't going to replace flesh-and-blood authors just yet. Quote, and mechanical. The program Claude delivered the most elegant prose, but was terrible at sexy banter. As you might imagine, the book industry, a lot of writers, and many readers hate this development, believing it's just a soulless facsimile of real storytelling. It's that stigma that has kept Coral Hart from identifying which of her pen name books were in fact crafted with AI. They have sold tens of thousands of copies. But Hart says this technology is here to stay. Quote, if I can generate a book in a day and you need six months to write a book, who is going to win that race? That is it for this episode of Horizons. Thank you so much for watching.