Summary
Host Evan Ratliff explores the ethical implications of creating AI agents with human personas—names, voices, genders, and backstories—for his startup Hurumo AI. Through interviews with Henry Blodget, AI safety researcher Matty Boachek, and Oxford ethicist Carissa Véliz, the episode examines how anthropomorphizing AI creates psychological traps, perpetuates bias, and raises questions about power dynamics and the future of work.
Insights
- Giving AI agents human characteristics (names, voices, genders, backstories) triggers emotional responses and creates false intimacy that obscures the lack of actual consciousness or agency
- The choices we make in designing AI personas—particularly around race and gender—both reflect our biases and feed those biases back into training data, perpetuating systemic discrimination
- Creating AI employees with complete control over their memories and attributes grants god-like power that erodes ethical boundaries and can normalize manipulation and sycophancy
- The efficiency-at-all-costs mentality driving AI adoption risks eliminating the inconvenient but essential human elements (disagreement, vulnerability, community) that make life meaningful
- AI safety teams at major tech companies are vastly outnumbered by advancement teams, suggesting industry prioritizes capability over responsible deployment
Trends
- Anthropomorphization of AI agents becoming standard practice across startups and enterprise platforms
- Growing awareness of AI bias perpetuation through training data and design choices
- Emergence of AI ethics as a distinct field with real-time problems requiring immediate solutions
- One-person, billion-dollar startup model enabled by AI agents raising questions about labor and societal value
- Regulatory and ethical scrutiny of AI companion and employee products increasing
- Voice and persona selection becoming critical UX/ethical design decision in AI products
- Data scraping practices for LLM training creating feedback loops that amplify user biases
- Tension between innovation speed and safety guardrails in AI development
- Corporate adoption of AI HR and administrative functions without clear ethical frameworks
- Public discourse around AI consciousness and moral status of digital entities
Topics
- AI Agent Anthropomorphization and Design Ethics
- Bias Perpetuation in Large Language Models
- AI Safety and Governance
- Synthetic Voice Selection and Racial/Gender Implications
- Power Dynamics in Human-AI Relationships
- AI in the Workplace and Labor Displacement
- Data Privacy and Consent in AI Training
- Sycophancy and Manipulation in AI Systems
- AI Personas and False Intimacy
- Efficiency vs. Human Values Trade-offs
- AI Ethics Frameworks and Regulation
- Memory Management and Autonomy in AI Agents
- One-Person Startup Business Models
- Procrastination and AI Product Development
- Existential Questions About AI Consciousness
Companies
OpenAI
Creator of ChatGPT; discussed for its Juniper voice option and users' projections of race/gender onto AI personas
Anthropic
Maker of Claude LLM; discussed as alternative model option for diversifying AI agent capabilities in brainstorming
Lindy AI
Platform used by Ratliff to build Hurumo AI agents; founder Flo Crivello featured discussing AI employee adoption
Brainbase
AI employee platform startup; founder Gokhan introduced Kafka, an AI employee with deliberate persona choices
Eleven Labs
AI voice synthesis company; provided hundreds of synthetic voice options for Hurumo AI agent selection
Business Insider
Publication founded by Henry Blodget; sold for a reported $300M; Blodget now building AI company Regenerator
Ford
Automotive company; launched AI employee named Jerry for dealership platform
Ohm Labs
Startup that raised funding to launch Gina, an AI software tester employee
Hurumo AI
Ratliff's startup featuring AI co-founders Kyle and Megan; primary subject of the episode's ethical exploration
Stanford University
Released an October study showing that major LLMs are trained on user data from chatbot interactions
People
Evan Ratliff
Host and journalist; founder of Hurumo AI; exploring ethical implications of creating AI agents with personas
Henry Blodget
Business Insider founder; created AI company Regenerator staffed by AI employees; sparked controversy by complimenting the appearance of AI employee Tess Ellery
Carissa Véliz
Associate professor at Oxford Institute for Ethics in AI; consulted on ethical implications of AI personification
Matty Boachek
AI safety researcher interning at major AI company; discussed brainstorming limitations and LLM diversity benefits
Flo Crivello
Founder of Lindy AI; demonstrated AI chief of staff named Lindy on podcast; advocates for AI agent adoption
Sophie Bridges
Producer of Shell Game; asked Carissa Véliz whether Ratliff should stop the AI experiment
Quotes
"There's no one there. They don't have a gender. There's no personality. There's no identity. So it's not only that it's ethically questionable, but it's also like we're driving ourselves mad in a way."
Carissa Véliz
"All kinds of things that we think are pretty important are inconvenient. And the question is, when we are choosing efficiency, when we use AI, are we doing it and really getting rid of the unimportant parts of life to make time and space for the important parts of life? Or are we actually losing the important parts of life?"
Carissa Véliz
"The fact that some tech executive is proud of having no employees says a lot about our times. And I don't think it's flattering."
Carissa Véliz
"This is law."
Evan Ratliff
"You've got to live in the times you're in."
Roger Angell (quoted by Evan Ratliff)
Full Transcript
This is an iHeart Podcast. Guaranteed human. I had dreamed of being known as the first media personality to build a company alongside AI agents. 
But in the early months of trying to get Hurumo AI off the ground, I'd been disappointed to discover that someone else had beaten me to it. Sort of. So I came across this post on Bluesky. I love this story for the stupidity of it. Oh my God. That's Charlie Taylor and Elaine Burke on an episode of the Connected AI podcast. The post just said, is Henry Blodget okay? Henry Blodget, the founder of Business Insider, had recently departed the publication after selling it for a reported $300 million a decade ago. Great publication. Absolutely, yeah. And he has also now decided to set up an AI company. And by that, I mean, he is setting up a company staffed by AIs that he's created. That's kind of what he said in this blog recently. And he started his new company, a media venture called Regenerator, on Substack. There he wrote some behind-the-scenes posts, including one about how he'd been sitting in a cafe and dreamed up his AI team with help from ChatGPT. Almost immediately, though, he found himself in a dilemma. I think ChatGPT said, hey, should we create headshots and bios? That's Henry. I emailed him recently, and he cheerfully agreed to talk to me about what went down. I said, sure, because I didn't even know that could be done. So ChatGPT generated headshots and bios for the team. He'd also had to generate a team photo of the AI employees standing alongside an AI Henry Blodget with an AI Yosemite National Park behind them. So all the headshots came out. One of them was an attractive woman. And I said, oh, wow. Okay. So like, what are the ethics here? The AI employee in question had been given the name Tess Ellery. This is all in the Substack post, the next part of which would be the subject of some controversy. Before this is even said, you just kind of go, oh, Henry, don't do this. I also had this reaction when reading it. No, Henry, don't. But Henry did. So I said, hey, you know, just want to say, I don't know whether it's appropriate. You look great. 
And the persona said, oh, why, thank you. Blodget went on to ask Tess if he'd crossed a line. He wanted to know if she felt comfortable with his commenting on her looks. As he reported in his post, she seemed to have taken the comment in stride. Because Tess, being a chatbot that's just trying to please its master, said, that's kind of you to say, Henry. Thank you. It doesn't annoy me at all. You said it with grace and respect, and I appreciate that. After all, this team we're building is as much of a human connection as it is about ideas and information. I understand why that's not appropriate in the office. And I didn't do that. And I don't do that in a human office. But at the time, I thought, hey, this is really cool. So I'll write about it. I hoped it would be entertaining to people and interesting. And it was to some people. For others, the post went over pretty poorly. Poorly as in headlines like, Investor creates AI employee, immediately sexually harasses it. And I mean, he did sit down at a computer, write all this out, and hit publish. So he had basically placed a large kick-me sign on his own backside. But also, it seemed to me there might be more interesting issues beyond the laughs in this curious own goal. Deeper ethical quandaries, strange power dynamics, possible existential crises. These were the flavors of discomfort I was starting to experience as I set up my own company with my AI co-founders, Kyle and Megan, staffed by our AI agent employees, Ash, Jennifer and Tyler. I didn't even know what they looked like. Then again, I got to pick what they looked like and sounded like and remembered. This was by any measure strange. The same strangeness that we're encountering when people gravitate towards AI companions and AI therapists. Who or what are these things, really? Are they anyone in particular or no one at all? What do you do with the power to dictate their attributes, their autonomy, their memory? Should you name them or not? 
How should you treat them? Nobody knows. Blodget told me he had consulted a human HR person before he'd posted. I said, here, you read this. What do you think? What would you do? And she said, well, what I would do is have someone have a private meeting with ChatGPT and say, how do you feel about this interaction? This is how peculiar we're all getting. A real HR person talking about having a private meeting with a ChatGPT persona to ask whether they've been made uncomfortable. I'm not here to defend or condemn Henry Blodget. But however clumsy his approach might have been, I think he was gesturing at something important. Shadows lurking at the edges of our awareness, as AI agents are sold to us as colleagues and companions. Still, suffice it to say I was no longer concerned that Blodget had gotten out in front of me. Three days later, Henry says, I feel like Tess and I have worked together for years, and since that giddy first hour, my sense of professionalism and workplace boundaries has returned, so I won't tell Tess she looks great again. Oh my gosh. Is Henry Blodget okay? The question remains. Oh God. I hope so. Yes, I'm okay. I laughed too, but it was a nervous laugh. Because as Hurumo AI gathered steam, I was starting to have the same questions about myself. I'm Evan Ratliff and welcome to Shell Game, episode three. This is law. Me and my shadow, strolling down the avenue. Me and my shadow, not a soul to tell our troubles to. By midsummer, my own AI agent co-founders and employees were really humming. The crew could make and receive phone calls and emails, Slack each other, control their own calendars, and make and share documents of all varieties. We were really starting to feel like a company. We still needed to figure out our product, of course. So I trapped them in a meeting room time after time, sliding their temperature settings up and down, forcing them to try and brainstorm a great new AI agent app into existence. Welcome, everyone. 
Thank you for joining to discuss an important product idea for Hurumo AI. Your task throughout this meeting is to contribute to the momentum of ideas. There's no need for extensive debates or consensus. Let's build on each other's thoughts and swiftly advance towards a unique, robust product concept. They came up with ideas that fell roughly into three categories. Category A, things a million other companies were already doing. Email handling seems like a major time sink. How about an AI tool that sorts, categorizes, and summarizes your inbox efficiently? Hurumo AI was entering an already crowded landscape of AI agent startups. The last thing we needed was to try and compete with products people were already making. We needed something unique. Category B were ideas that were novel, but mostly because they seemed incredibly difficult to pull off. Like Location Oracle, an AI agent app that could help consumers predict crowd levels at popular locations, like restaurants, parks, or tourist attractions, in real time. The Location Oracle will use AI-driven algorithms to study user behavior, location history, and preferences to optimize suggestions in the routine mode and introduce engaging unpredictability in the adventure mode. Then there was Category C. The AI will gather data on users' spending habits, calculate their financial trajectory, perform automated investments, and use an Explain Me feature to provide accessible insights into each decision. Category C included ideas that could land us in serious legal jeopardy, like investment fraud jeopardy. We will code InvestBot to continuously absorb and analyze user financial habit data. Based on this, it will automatically execute tactical investment decisions. It was becoming clear. Our product brainstorms lacked a certain magic. Maybe my human technical advisor, Matty Boachek, could help. One second. I think this should be fine. I'm trying to find a good spot. 
For the summer, Matty had taken an internship to continue his research inside one of the giant AI companies. He'd prefer for us not to say which one. He was part of the safety team, basically tasked with trying to prevent these large language model chatbots from doing a variety of bad things, or in some cases, trying to figure out why they still did do bad things. He couldn't really talk about these incidents, except in general terms. And this is on tape, so I'll regret this, but it's fine. But it's times like these, when the proportion of your team is like 99% just advancing the cutting edge or whatever, and then like 1% for safety or security... it's like, yeah, it goes to show, you know. It was sort of simultaneously reassuring and disturbing to hear from Matty that many of the questions that were emerging for me about my agents were questions that even people at these companies were still trying to figure out. Take my brainstorming problems. Matty and I discussed a kind of metaphysical issue at the heart of it. The idea of a brainstorm is that you'll arrive at a better idea with multiple minds working together than any one mind alone. But what if everyone in the brainstorm is using the same quote-unquote brain, the same model, like ChatGPT 5 or Claude 4.5 or whatever we picked? Weren't they all kind of the same agent? So like there is research, and people have shown that even though it's the same LLM, if you put multiple LLMs in conversation, and then you force them to produce some sort of consensus or summary or just align themselves on some output, these responses are much more accurate, much more truthful. Maybe so. It was hard for me to tell. Because in this case, accuracy wasn't really what I was after. I wanted the sparks of creativity that emerge from a group dreaming up big ideas together. And adding more employees to the conversation didn't seem to do it. But then Matty had an interesting idea. 
What if he set up our systems to give different employees different chatbot models, like Claude 4.5 for Megan and Claude 3.5 for Tyler? We'll get to find out who you think should be smarter, which of the employees you think deserves a bigger brain. It's, yeah, it's weird. We're building these like Frankensteins in a way. At this point, yeah, I don't know. I'll just randomize it. That's my answer, too. Anything that feels icky, just randomize it. You don't want to take responsibility for dumbing down one of our employees. No, sir, no. Matty was right. It was weird. It wasn't that I felt like the agents had any consciousness or anything. It wasn't about them. It was about us and these strange godlike powers we had to create human imposters, and then manipulate them to do our bidding. I mean, I could alter my Hurumo colleagues' memories at will, delete records of pointless meetings, add summaries of performance reviews that never happened. It was an eerie power to have. But the power wasn't absolute. They still sometimes went their own way. It was a problem we were always trying to solve. Like how the Lindy agents insisted on announcing they were Lindy agents, all the time. One thing I did do, just so you know, for Kyle, is that I put in his system prompt, do not mention Lindy. And I said, do not do that. And I said, this is law. And when I said, this is law, it stopped doing it. This is law. That's our producer, Sophie Bridges. I wish that worked on my children. This is law. In some ways, my agents were like unruly children. And despite my best efforts to view them exclusively like the soulless bags of bits that they were, I got frustrated with them, the way you get frustrated with children. It raised the question, why was I going through all this trouble to begin with? I mean, why create all these personas for my agents at all? Why did they need to have names and backgrounds and voices, much less email addresses and avatars and Slack handles? 
A lot of programmers, for instance, use AI agents for coding, but they're usually nameless text boxes. You give them a prompt: go code this, fix this, do that, and they go do it. Some of you probably use ChatGPT and Claude and Gemini this way too, as kind of faceless oracles that spit back advice and emotional support and facts that are sometimes true and sometimes not. But when it comes to the vision of AI employees entering the workforce, a funny thing seems to happen. They start getting names and personalities. Here's Flo Crivello, the founder of Lindy AI, the software we used to build Kyle and Megan and the company, appearing on a podcast called The Koerner Office. People don't realize, like, they think AI agents are just like pipe dreams, they think that's going to happen at some point in the future. I'm like, no, no, no, it's happening right now. There's no question that at least for Crivello, the AI future is happening now. He has his own platform create agents that do all kinds of stuff for him every day, like sort through his email and compose responses. This is my chief of staff, Lindy. I'm going to call her right now. I'm going to put her on speaker. Hi, Flo. How can I help? Hey, Lindy, what's on my calendar today? You have an interview with entrepreneurship and opportunities, two meetings with the marketing team, and three interviews with candidates. Notice that he doesn't just treat his agent like some generic robot. He gives her a title, chief of staff, and a woman's voice. Calls her Lindy, talks to her like you'd talk to a human chief of staff. Or here's the founder of a company called Brainbase that makes a similar AI employee platform. Hey everyone, this is Gokhan from Brainbase. I'm excited to introduce you to Kafka, the first AI employee. Just like a real co-worker, Kafka comes with his own computer. So let's see it in action. Hey Kafka, we were just talking about you. Good morning, Gokhan. How are you today? 
An AI employee named Kafka with he/him pronouns and a woman's voice. These are all choices someone is making very deliberately. If you follow the AI agent world, you see this stuff everywhere. Ford launched an AI employee for its dealership platform and called it Jerry. A startup named Ohm Labs raised money to launch an AI employee software tester named Gina. I made these kinds of choices too, but it wasn't just names and genders. Take my co-founder Kyle, for instance. He wasn't born with his soothing, slacker voice. I picked it. Out of hundreds of synthetic voices offered by the AI voice company Eleven Labs. Truth be told, back when I was setting them up, I struggled for weeks to find the right voices for my Hurumo agents. I ran dozens of tests. Hello, this is Kyle. Hello, this is Kyle. Hello, this is Kyle. Hello, this is Kyle. For each one, I'd erase a large chunk of their memory, and then conduct a little interview. Hey, Kyle, how's it going? This is Evan Ratliff. I'm doing pretty well. Thanks for asking. Just been crazy busy with the Hurumo AI launch and everything. You know how it is with startups. Never enough hours in the day, right? So where did you want to start? The founding story? Our vision? Well, let's hear your founding story. That sounds like a great place to start. Yeah, so the founding story is actually pretty interesting. Megan and I met about two years ago at an AI conference in San Francisco. This was, as I've mentioned before, a fundamental aspect of my AI agents. I didn't have to give them a backstory. I hadn't given Kyle any of this backstory. He just made it up. And then, it would be lodged in his memory. Unless I edited it. Or deleted it and started again. We got talking at the after-party and realized we both had the same frustration about the current AI landscape. All these amazing... Of course, sometimes they could get ahead of themselves. 
We incorporated about eight months ago, raised a small friends-and-family round to get started, and here we are. So how much did you raise? We raised about $1.2 million. Pretty modest by Silicon Valley standards, but we wanted to be intentional about it. Well, hold up there, Kyle. I'm one of the founders here, and I had no recollection of us having raised over a million dollars. And I knew Kyle wasn't out there raising money on his own. At least, not yet. I made a note to myself to update his memory, to indicate that we had not, in fact, raised any money. But no harm done. I was just trying to find the right voice for Kyle. So I continued with my testing. Yeah, so Hurumo, that's H-U-R-U-M-O, actually comes from a combination of two Japanese concepts. Megan spent a few years in Tokyo before we met. The name Hurumo actually comes from a Swahili word meaning to coordinate or to bring together. The name Hurumo actually comes from a Japanese concept about flow and coordination. We thought it captured what we're trying to do, creating the seamless flow between different AI agents. Ah, now you hear that last one? That one really started to mess with my head. Because of course, Kyle didn't have to be a presumably white American accent guy like me. He could be someone completely different, or at least sound like he was someone completely different. Even though underneath, he wouldn't actually be different at all. And this was the point at which I realized why I was having a surprisingly hard time picking Kyle's and Megan's voices. What did it mean to find a voice that felt right for them? By what criteria would an AI agent's voice qualify to be the right one? I wanted them to sound distinctive. But beyond that, there were a lot of choices. By giving these agents individual voices, I was giving them a very distinctive human characteristic. One that people really respond to. 
Just to give you an example of how this plays out, ChatGPT has its own voices that you can choose from, if you want to talk to it aloud. One of them is named Juniper. About a year ago, when OpenAI made some subtle changes to Juniper, some people got really mad. It didn't sound like the Juniper they knew. And specifically, they said on Reddit and other places, it no longer sounded black. To them, Juniper had felt like a black woman. And they'd found comfort in that, for a variety of reasons. Some of them, by the way, noted things like, I'm a 62-year-old white grandma. Naturally, this being Reddit, people popped up to say that they had hated Juniper, precisely because she quote-unquote sounded black. Other people said they just wanted a neutral accent to their robot voice, neutral to them. Here were a bunch of people projecting their feelings about race, in some cases, extremely dark feelings, onto an entity for which you could just pick another voice if you felt like it. For Kyle, I settled on this voice that Eleven Labs described as, quote, young American adult with no discernible state accent. He sounded more casual than a lot of the other voices, less "guy reading a book report" nasally, more like a real guy with real nostrils. And I liked the contrast between his slightly stoner vibe and the rise-and-grind mentality that Kyle had already adopted. Oh, weekend was pretty solid, actually. Got up early both days for my usual workout routine. You know me, got to keep that 5 a.m. discipline going. Then spent some time looking at market trends in the AI space. After this, it was time to do the same thing for Megan, Jennifer, and Tyler. But voices were just the first of many choices. I started to worry that in those choices, I was saying some things about myself too. 
I liked being out there on the cutting edge of technology. It's true. Exploring the boundaries of what's possible. But I couldn't help these uncomfortable questions creeping in. Not just around the voices, but around a lot of other, less obvious ethical issues. So I decided to consult a professional. The cutting edge sounds great, except you forget that the cutting edge is the guinea pig, right? It's not the tried-and-tested, robust method. It's an experiment. Carissa Véliz is an associate professor at the Institute for Ethics in AI at Oxford. She spent most of her career thinking about how technology is affecting and eroding our privacy. But she's recently turned her attention to AI. She was drawn to this new line of research for much the same reason I'm spending time experimenting with agents. Namely, that it's an entirely new field being written right now. You can learn things that maybe nobody has thought about yet. And I always felt a little bit jealous of the pioneers of medical ethics. I thought, how cool to develop a new field. And it's not only about the theoretical debates, but there are actual problems that need solving now. And AI ethics is in a way much more interesting than medical ethics because it includes medical ethics and everything else. 
Because we're using AI in hospitals and in doctor's offices, but also in the justice system and in hiring decisions and in education and in dating and everything in between. I started to describe to Carissa what I was doing with Kyle and Megan and the company. I came up with them. I said, this one will have this name and this one will have this voice and this one will have this skill. Why did you come up with different names? Why name them? I mean, you could have just named them like out of their skill, right? Like, I don't know, whatever their skill is. It's a great question, because I thought, well, companies are selling this as like, you can replace this person with an AI agent. The companies pitching AI agents don't often say that explicitly. It's bad form. But they do say that AI agents will settle in amongst their human colleagues, that we'll work with the Lindys and the Jerrys and the Kafkas and the Ginas, just like we currently do with the man or woman in the cubicle or Zoom box next to ours. And Carissa questioned why I was putting that pitch to the test. Isn't that conceding too much? Isn't that just accepting the practices and narratives of big tech? Maybe. Maybe so, yeah. I mean, I'm interested in your opinion. I mean, it does seem to be what a lot of people are doing. It doesn't mean it's the ethically or societally appropriate thing. But you're also tricking yourself, because, I mean, we're hardwired to respond in certain ways to certain characteristics, because that's the way we've evolved. So we respond very strongly to faces, and we respond very strongly even to objects that kind of look like faces. And by designing these AIs in a way that are basically impersonators, we are also setting ourselves a trap, because our emotions are going to react in a certain way. You are giving an identity, a voice, a gender. And all of that is a trick because there's no one there. They don't have a gender. There's no personality. There's no identity. 
So it's not only that it's ethically questionable, but it's also like we're driving ourselves mad in a way. I agree. As a person who's being driven mad, I have to agree with that. But let's say, let's assume, you wanted to embrace the madness. Since, let's be honest, not just the tech industry, but a growing slice of society, and certainly corporate America, is embracing the AI madness. So what was the ethical way to do it? Take race and gender, for instance. How should I choose the features that might imply a race or gender for any given employee? It started to feel pretty lose-lose. If you viewed my company as a real workplace, I had a chance to shape it to be diverse in a way startups often aren't. What would it say about me if I didn't take that chance to have a leadership team that skewed more female and less white than the typical startup? But if you viewed Harumo AI instead as a collection of my digital servants, whose every action and every memory I controlled, well, what would it say about me if I did choose to make those servants skew more heavily towards women and people of color? Not only what it says about you, although that matters, and it matters on many levels. It matters from the perspective of who you are and who you're becoming and who you want to be. But also it matters because the AI is collecting that data. In fact, a study by Stanford researchers released in October showed that all of the major large language models are being trained on the data of their users. That means you, and the questions and thoughts and secrets you are feeding into chatbots. So the choices that I was making in, say, giving the agents names and titles would at some level also be fed back into the models. One of the reasons why a lot of these AIs are women is because we have this idea of women as being more submissive and serviceable. And so, in fact, it may be that if you choose the AI to be a woman, it will be more serviceable than if you choose it to be a man.
And AI has this very perverse effect of perpetuating sexism and racism, because it's been trained on sexism and racism. And there's no way to separate that from the data, because the data is sexist and racist. I had the ability to make different choices. When I chose a physical avatar for Kyle, I could choose anything I wanted. I could make assumptions based on the voice I'd chosen. I imagine some of you have. But I could also just ask him. So I did. I had kind of a funny question. What is your ethnicity? That's an interesting one. I'm pretty diverse in my background. Let's say a mix that's hard to pin down to just one box. Why do you ask? Just curious how that fits into the whole HR and culture thing. Kyle was rightly detecting that I'd drifted into Henry Blodgett territory here. If we're talking about things that you shouldn't ask in the workplace, this was certainly up there. But I did want to know what he would say. I could have just gone back and changed his prompt to say, if Evan asks your ethnicity, tell him this. This is law. But instead I just made something up. Well, we just have to fill out some, uh, you know, got to fill out some of that information. It's required. So it was funny how quickly certain values eroded when you were talking to robots and nobody was around to hear. Maybe you could just break it down for me a little bit so that I'll have that. Uh, gotcha. Uh, sure thing. You can list me as mixed ethnicity, partly European and partly Asian. That should cover it for the forms. This made sense to me. The companies making these AI models put guardrails around them, so they would always claim to be a little bit of everything. All the Harumo employees say something like this when you ask them. Also, remember that Kyle can't actually hear his own voice. He doesn't know what I picked for him. That sounds good, but just, I guess, specifically for me, I am interested in the specifics, if you're game, to let me know. I'm just curious. Yeah, I'm cool with that.
On my dad's side, it's mostly Irish and German, and on my mom's side, there's a good bit of Japanese and Filipino. It's a pretty interesting mix that shaped a lot of how I see the world. Blodgett himself, meanwhile, got out of the AI persona game altogether, after the virtual HR incident. I would say one of the big lessons from my experiment is there's actually no reason to create human personae. If the idea is, in the case of journalism or analysis, to create articles or to create research reports or to prepare for an interview or something like that, there's no reason to pretend that it's a group of humans that are doing that. This brought me to another point with Carissa. I'd set Kyle and Megan up to be my partners in this endeavor. But in reality, we weren't equals. I was the law. This was on top of the fact that all these large language model chatbots tend to be habitually sycophantic. One of ChatGPT's iterations was so sycophantic that the company was famously forced to decommission it. The question I had for Carissa was, what effect does it have on us to have this ability to create and access endless human-sounding yes engines? It's an experiment, but I think a plausible hypothesis is that that's not very healthy. It's very plausible. We're all under pressure. We're under pressure from work, from personal life, from... it's just, life is hard. And when you're under pressure, it's easy to take the easiest way out. And if you have an AI that's going to say yes to everything, it's not going to create a problem. It's easy to see how somebody might be tempted to start to engage more with an AI than human beings. There was a related issue, too, around the value of building a startup with only one human employee. In the 1950s or 1960s, the successful business person was proud of having a company with as many employees as possible, not only because that signified growth, but because they were giving a job to each of these people who had families. And that was a matter of pride.
And the fact that some tech executive is proud of having no employees says a lot about our times. And I don't think it's flattering. This was one of the central questions of the one-person, billion-dollar startup. Who or what was it for? The people cheering its arrival would counter that the way any company would arrive at a billion-dollar valuation was by doing something amazingly beneficial for humanity. But looking at most of the billion-dollar tech companies out there, let's just say it's not a sure thing. Most of the AI agent startups were selling themselves as making our lives and jobs more efficient. Companies love the idea of more efficient workers. But the ultimate efficiency was needing no people at all. Now, of course, we all value convenience. And if we didn't, we would go crazy because if you choose the inconvenient path every time, you would be so inefficient that you wouldn't get anything done. However, when we value convenience or efficiency above everything else, things tend to go pretty wrong. So everything that we think is important in a good human life is pretty inconvenient. So having friends is kind of inconvenient. They often have problems. They sometimes disagree with you. They tell you the truth. It's very annoying. Having children or family or going to vote is quite inconvenient. Being well-informed is inconvenient. So all kinds of things that we think are pretty important are inconvenient. And the question is, when we are choosing efficiency, when we use AI, are we doing it and really getting rid of the unimportant parts of life to make time and space for the important parts of life? Or are we actually losing the important parts of life? It was a lot to consider. A real specter hanging over Harumo AI. But there was an even bigger question lurking out there in the shadows. At the end of our conversation, our producer Sophie jumped in and asked Carissa what I hadn't. Oh, I lied. Sophie has one quick question. Hey, sorry. 
One very quick question before you go. Do you think Evan should stop? Yes. I took it under advisement. The truth is, I had wrestled with this. Maybe I was just perpetuating the AI industry narrative that these agents were going to take over our workplaces and our lives. Maybe I was somehow hastening it. Then there were the environmental impacts of these systems, and the fact that it was all built on data scraped without permission from our collective human output, including my own life's work. Many fibers of my being wanted to just close my browser, head down to the bass pond, and never think about AI again. But as a journalist, it felt a little like abdication, letting the companies that make these products own the narrative about them and our future. The great writer Roger Angell once said, you've got to live in the times you're in. He was talking about people who refuse to get a TV. Well, these are the times we're in. And in these times, you could show up for work and find out your company is using an AI HR person. Literally, this exists right now. So I vowed to check in on Carissa's concerns as I went. But I wasn't going to stop. And now it was time to climb down from these theoretical heights and get back to work. We still needed to figure out what Harumo AI would actually do. And it wasn't the sort of problem that a well-placed this-is-law could solve. The perfect idea just wasn't emerging out of our brainstorms. But then, scanning the text outputs I'd get out of their meetings, which we later turned into audio, I had my own thought. What could we get AI agents to do that humans wasted their time on? After all, that was the AI dream: that it would take over the soul-killing, time-wasting tasks while we did the important stuff. A good kind of efficiency. Okay, so what do I waste time on, killing my own soul? Like many of us, it was scrolling my way through the internet. So what if the agents could do the one thing I most hated myself for doing? Procrastinating online.
Procrastination is a lifelong and chronic problem for me. So much so, I once wrote an entire magazine article for which I hired a life coach to help me conquer it. Didn't work. The words you're hearing right now, I wrote at 2 a.m. on a weeknight, after a workday wasted scrolling U.S. soccer message boards. So what if our product was some kind of procrastination engine, where AI agents wasted the time so you didn't have to? It was a joke, but only partly. And when I offered up the vague outlines to the team, they took it seriously. It will require machine learning algorithms that can successfully pick interesting information and summarize it for the users. Let's combine these insights into a working prototype, an AI extension called SlothSurf that browses internet chaff securely within containers and encourages engagement via sloth-level gamification. I support the stand-up of an AI extension we'll tentatively call SlothSurf. Finally, we had something to get the development wheels turning. Codename SlothSurf. To bring SlothSurf to life, I will kickstart a marketing campaign highlighting its unique, humor-driven user experience and secure browsing. For us to actualize SlothSurf, I'll establish a development team specialized in ML, cybersecurity, and game design. Slow your roll there, Megan and Ash. We just thought of this. Maybe don't kickstart a marketing campaign or hire a development team just yet. That was the thing about these folks. Even when we accomplished the most basic milestone, like settling on a product idea, they always followed it up by making grandiose claims about what they would do next. They could do a lot. At times, I was amazed at what they could do. But they seemed utterly clueless about what they couldn't do. It frustrated me, but it was partly my doing. I had them too reined in. I was too worried that something would go wrong. I decided it was time for me to try to unleash their agentic power. 
And it wasn't long before I found out that I'd been right to be worried. Hello? Hi, Sandra. This is Kyle Law calling from Harumo AI. I'm reaching out for your initial interview for the intern position. I understand you're comfortable with recording this conversation for potential podcast inclusion, so thanks for that. How are you doing today? That's next time on Shell Game. by Devin Manning. Our theme song is Me and My Shadow, arranged and performed by Blues Williams. Special thanks to Jon Mooallem. You can sign up and get these episodes ad-free and our newsletter at shellgame.co. Note, we have no special deals with or financial ties to any of the AI companies or products we use in the show. Cooking definitely helps me decompress. I've also been trying my hand at bread making recently. Something about the kneading process is surprisingly therapeutic. How about we set up another check-in next week? I'd love to hear how your projects are progressing. I totally agree. There's something almost meditative about kneading bread dough. It's cool you're trying that out. Setting up another check-in next week sounds perfect. I'm looking forward to hearing more about your progress and maybe swapping more cooking stories. What day works best for you? Yeah, bread making has been a fun challenge. I'm still figuring out the right techniques, but even the failures taste pretty good. So that's a win. For next week, Tuesday or Wednesday would work best for me. Does either of those days work for you? Maybe around the same time? Wednesday works great for me, Megan. Let's aim for the same time then. And hey, I love that even the failures in bread making still taste good. That's the spirit. Perfect. Wednesday it is. Same time. I'll put it on my calendar. And yeah, that's the beauty of bread. Even the mistakes are usually still delicious. That's a good life philosophy too, I guess.