TED Radio Hour

How companies use AI to choose who gets hired and fired

50 min
Oct 3, 2025
Listen to Episode
Summary

Investigative journalist Hilke Schellmann explores how AI is reshaping hiring, monitoring, and firing decisions across companies. The episode reveals that while AI promises efficiency and fairness, many tools lack scientific validation, perpetuate bias, and operate with little transparency or accountability.

Insights
  • AI hiring tools often lack scientific basis—facial analysis and emotion detection in video interviews have no proven correlation with job performance, yet are widely deployed
  • Algorithmic bias gets embedded and amplified at scale; Amazon's gender-biased algorithm affected millions of applicants, while human bias typically impacts individuals
  • Employee monitoring has expanded beyond warehouses to knowledge workers via keystroke tracking, sentiment analysis, and productivity metrics that incentivize theater over actual work
  • Companies continue using AI tools they know reject qualified candidates because the workload reduction justifies the false negatives
  • Applicants are 'forced consumers' of opaque technology with no recourse; transparency and validation standards are largely absent from the vendor ecosystem
Trends
  • Rise of one-way video interview screening with unvalidated AI analysis replacing human interaction
  • Expansion of workplace surveillance from task-measurable roles (warehouses, drivers) to knowledge workers (lawyers, coders, medical professionals)
  • Emergence of 'productivity theater'—employees gaming monitoring systems (mouse jigglers, fake activity) rather than improving actual output
  • Flight risk prediction algorithms influencing promotion and layoff decisions based on correlations, not causation
  • Skills-based hiring replacing degree requirements, but AI still struggles to measure soft skills objectively
  • Bias laundering—historical biased hiring data gets encoded into algorithms, then presented as objective mathematical truth
  • Lack of regulatory oversight; vendors self-monitor their own tools with inherent conflicts of interest
  • AI-generated resumes and avatar interviews creating AI-vs-AI hiring scenarios, disconnected from actual job capability assessment
  • Post-hire surveillance data being repurposed for layoff decisions (e.g., key card swipes used against caregivers and people with disabilities)
  • Growing awareness among job seekers of algorithmic gatekeeping, driving demand for direct company applications and recruiter networking
Topics
  • AI bias in hiring algorithms
  • One-way video interview screening
  • Facial analysis and emotion detection in recruitment
  • Employee monitoring and keystroke tracking
  • Productivity metrics and performance measurement
  • Flight risk prediction algorithms
  • Algorithmic decision-making in layoffs and firing
  • Resume optimization for AI screening
  • Skills-based hiring vs. credential-based hiring
  • Workplace surveillance and privacy
  • Validation and testing of AI hiring tools
  • Transparency and accountability in AI systems
  • Bias in performance review data
  • Job applicant rights and recourse
  • Human vs. algorithmic hiring effectiveness
Companies
Amazon
Built gender-biased ML algorithm to rate applicants; penalized resumes with 'women' keyword; publicly shelved the tool after attempts to fix the bias failed
Google
Receives over 3 million job applications annually, driving need for automated screening tools
IBM
Receives over 5 million job applications annually, exemplifying volume problem that drives AI hiring adoption
Goldman Sachs
Received over 100,000 applicants for summer internship program alone, illustrating scale of hiring challenges
HireVue
One-way video interview platform using facial analysis and emotion detection to predict job success
LinkedIn
Job platform that uses AI to rank candidates and recommend opportunities based on engagement signals
Monster
Early job platform (early 2000s) that democratized job applications and created volume problem
ZipRecruiter
Job platform CEO discussed how AI loses information when ingesting resumes due to formatting issues
Microsoft
Found that employees spend approximately one hour daily on 'productivity theater' rather than actual work
People
Hilke Schellmann
Author of 'The Algorithm'; conducted an extensive investigation into AI use in hiring, monitoring, and firing
Manoush Zomorodi
Host of the TED Radio Hour episode, interviewing Hilke Schellmann about AI in the workplace
Lizzie
Case study subject; laid off after scoring 0-33 on one-way video interview; sued company and reached settlement
Emily Smith
Interviewed subject; experienced anxiety from real-time tracking of file review times and break duration
Quotes
"An algorithm that is used across all of the resumes that are coming in into a company could like just multiply the harms."
Hilke Schellmann, early in episode
"There's no science here. How could a smile, when you answer the question, what are your strengths and weaknesses, be predictive that you can do the job?"
Hilke Schellmann (quoting a psychologist), mid-episode
"I clearly fooled the algorithm, and I didn't even fool it in a sophisticated way."
Hilke Schellmann, discussing the 'I love teamwork' experiment
"We know that almost 90% of the respondents said that they know that the AI tool rejects qualified candidates. So they know this doesn't work. But they still use it."
Hilke Schellmann, late in episode
"Job applicants are sort of forced consumers of the technology, right? Because if you want the job, you're not going to say no."
Hilke Schellmann, discussing applicant agency
Full Transcript
Support for NPR and the following message come from the Kauffman Foundation, providing access to opportunities that help people achieve financial stability, upward mobility, and economic prosperity, regardless of race, gender, or geography. Kauffman.org. Hey, it's Manoush. October 1st, whether you knew it or not, was a historic day. It was the first time in over half a century that NPR and stations like this one operated without federal support. It feels very uncertain, but here's what we do know over here at NPR, that public radio endures, public media endures. It is independent, it is resilient, it is people-powered. Whatever the moment, you're still going to find us here telling stories that matter. Thank you for being here with us. This is the TED Radio Hour. Each week, groundbreaking TED Talks, delivered at TED conferences, to bring about the future we want to see around the world, to understand who we are. From those talks, we bring you speakers and ideas that will surprise you. You just don't know what you're going to find. Challenge you. We truly have to ask ourselves, like, why is it noteworthy? And even change you. I literally feel like I'm a different person. Yes. Do you feel that way? Ideas worth spreading. From TED and NPR, I'm Manoush Zomorodi. When was the last time you applied for a job? Well, whether it was last week or several decades ago, the process has gone, and is still going, through a radical transformation. About 25 years ago, people still looked through the classifieds. They mailed or emailed in their resumes. But then, as the internet took off, people started using job platforms. Yeah. I mean, I think that is sort of the beginning stages, right? With the dawn of job platforms, like the LinkedIns, Monster, ZipRecruiter, they all started around the early aughts, the early 2000s. And with that, applying became so much easier. This is author and journalist Hilke Schellmann. Now you could just go online. 
You know, an algorithm will show you all the jobs that you are probably qualified for, and you can, like, apply with one click. So it's great. It's like a total democratization of, like, applications. More people had more access to more jobs. More employers had access to better qualified candidates. Online job boards were good for everyone. But as the years went by, they became too much of a good thing. It got so easy to apply for a job that everyone started applying to everything. So we see a lot of companies and, you know, applicants complain about this. They close their application after 24 hours because they already got hundreds and thousands of resumes. You know, Google said they get over 3 million applications a year. IBM gets over 5 million applications. I mean, the volume is just ginormous. It's really hard. I mean, I sort of understand that hiring managers are overwhelmed, and they want a solution. To get through all those thousands, millions of applications, companies turned to software that weeded out the least qualified people immediately. But even that had limitations. I think Goldman Sachs a couple years ago had over 100,000 applicants for their summer internship program alone. And what recruiters and hiring managers have told me, they're like, look, these people, look great. They all have these beautiful bachelor's degrees and master's degree, but often they don't have a sort of long work history. It's really hard to tell. Like, what are they really good at? You know, would they be good at this company? So about 10 or so years ago, big companies started using AI. The hope was to screen applicants for more than just their qualifications. Amazon, for example, built an algorithm to rate applicants, kind of like how shoppers rate products. They had built a machine learning algorithm to find out, like, the incoming resumes, who are going to be the people that are going to be successful in this branch of Amazon. 
To do this, Amazon fed the algorithm all the resumes from qualified applicants over the previous decade and asked the machine to find patterns. Trouble was, those resumes mostly came from men. And so the algorithm assumed that men would be best suited for the job. And that tool started to penalize people who had the word woman or women on their resume. You could be an amazing software engineer, but if, I don't know, in college your hobby was being part of the women's chess club or the women's softball league, you would get penalized for that. Amazon tried fixing the algorithm's gender bias problem, but eventually decided the tool couldn't be trusted and ended up shelving it. Unlike many companies, Amazon publicly admitted what had happened. It's one of the rare examples of how AI is being used to hire people that we actually know about, Hilke says. Since then, a lot of companies don't want to speak out because I think they fear liability, right? Because if you say, like, hey, we had a tool that disadvantaged women, imagine you're the head of a large company and you have hundreds of thousands of people that applied, and suddenly possibly all of the women that applied would have a case in a class action lawsuit against you, because you de facto discriminated against them, right? Like, the scope of these tools is just so vast. And so the harm could be much worse than maybe one human hiring manager who discriminates, which sucks, no question. But an algorithm that is used across all of the resumes that are coming in into a company could like just multiply the harms. And it's really hard for applicants, right? Because we know that it's sort of secretive. It's just the way the system is built. It's sort of like a little bit of a cloak of silence here. That cloak of silence now extends to how AI is being used to hire people, track them on the job, and even decide who gets fired. Companies say these systems promise speed and fairness. 
But for workers, the experience can feel like being managed by a black box. What do we know? And what should we know? Today on the show, how AI is reshaping hiring and firing, where it helps, where it harms, and what it means for the future of work. We're talking to Hilke Schellmann, the author of The Algorithm, a book that chronicles her investigations into AI in the workplace. Hilke remembers the first time she saw a presentation where tech companies were promising to help overwhelmed HR execs with tools like facial analysis. This was back in 2018. It was like the dawn of the AI era. It was very early on. And I walked into this conference room and I saw this demo up on the screen, right? Candidates just had to upload a video of themselves talking about their qualifications and AI would do the rest. And they were saying, well, in this one-way video interview, we're going to use the words that people use, the intonation of people's voices, and the emotion on their faces. Our computer vision can recognize that. And with all these three attributes, we can predict how successful you'll be in the job. You see the graphics stuff on the face. And it was just quite striking. Just by the way that you express yourself, we have the technology to be able to tell that you will be a good match with this job. Yeah. And I was blown away. I was like, wow, who knew that our facial expressions in a job interview are predictive of success in the job? Huh, who knew? This is totally new science. I had never heard of it. And I thought, well, this could be a breakthrough. Maybe this is better hiring, right? Because we know that humans are totally biased in hiring. We can't get the bias out of humans. As soon as I know you went to the same college I did or to the same school, I see you in a different light than just looking at your skills and capabilities, right? In early 2018, I think, like me, a lot of companies were just blown away by this. 
And then I started to verify this, because I was like, if facial expressions are this cool, I wonder what else we can predict. Let's look at the science. And I talked to psychologists who have studied emotions, facial expressions, and computer vision for decades. And computer vision is just a few years old, maybe a decade or so. And they were like, there's no science here. What are you talking about? How could a smile, when you answer the question, what are your strengths and weaknesses, and you have the same smile as like five other people who did the job interview before and are now successful in the job, how can that be predictive that you can do the job? I was like, oh, that's a good question. They're like, no, this is just correlation. To them, it was not meaningful at all. And they thought this was pure rubbish and could actually cause bias and discrimination. So the problem is technology has created this problem. And therefore we need more technology to solve the problem, which is where the AI comes in. Yes. You know, it seems sometimes that we believe technology will always solve our problems. So I took this deep dive and went down the rabbit hole into AI in the workplace. And I uncovered that it's pretty ubiquitous already, whether we realize it or not. I found different machine learning and AI tools all throughout what folks call the employee life cycle. We see a lot of AI tools in hiring. We see it for one-way video interviews in hiring, where you have no one on the other side and you just kind of talk to yourself. We see games and assessments that folks need to play that use AI. We see it for AI background checks in hiring. And then we also see it in the workplace everywhere. We see keystroke logging software that checks everything that you type on your computer. We see analysis of Zoom meetings, and you can tell if somebody is a bully, who spoke more, who spoke less. 
We find companies checking for sentiment of their employees in anything that is written on a work computer. We see analysis of people's emotions, all kinds of things. And that could maybe lead to somebody getting rejected, and it has nothing to do with your skills, your capabilities, your qualifications, right? Things that we should look at when we judge candidates for a job. It has just everything to do with, like, maybe your gender, maybe your ethnicity. And that's a real problem because that's not what we want to do. I mean, but here's the thing. I have been on the other end and sat in on hiring interviews, and I don't feel like humans are that much better at deciding. I have definitely thought, oh, this person, they seem great. Yeah. I don't know if they're going to actually do well at this job. So is this just a case of humans looking for certainty where maybe there kind of is none, or thinking maybe there's some? Yeah, I think you put the finger right in the wound. So first of all, we know that humans are very biased in hiring, right? And we know from trying to train humans that it's just really, really hard to get out the bias. Also, some people just interview really well. Yeah, we call that competence versus confidence. A lot of people are very good at talking about how they would do the job, and they exude confidence, right? And you feel like, oh, they must be competent, the way they talk about how they solve these problems. And come to find out, when they show up day one, or a couple of weeks in, you're like, wow, they are just very good at talking about how they do the job. They're actually not good at the job. And that happens all the time. But I love the idea that I could get a report that's like, actually, when she raised her eyebrow, that is a sign that she will not be able to complete these tasks as well as she could. 
Wouldn't that be amazing? But you're saying, no, a lot of promises are being made that actually have no basis in science. Exactly. Exactly. And we see this a whole lot with AI tools, right? So like, I'm not saying we should go back to human hiring. It's not that I found all this out and concluded, hey, all the technology is crap, and we should just go back to the traditional way of human hiring. That is actually not a good idea, because we can't get the bias out of people. But unfortunately, often the people who built these tools don't know exactly what the tool predicts upon either. That strikes me as very problematic. In a minute, more with investigative journalist Hilke Schellmann about the secretive AI tools being used to hire people. And then, if they finally get the job, still more tools being used to monitor them while they work. I'm Manoush Zomorodi and you're listening to NPR's TED Radio Hour. We'll be right back. Support for NPR and the following message come from the Kauffman Foundation, providing access to opportunities that help people achieve financial stability, upward mobility and economic prosperity, regardless of race, gender or geography. Kauffman.org. This message comes from TED Talks Daily, the podcast that brings you a new idea every day. Learn what's transforming humanity, from balancing AI and your critical thinking to surprising discoveries about the adolescent brain. Find TED Talks Daily, wherever you listen. It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show, we're spending the hour with investigative journalist Hilke Schellmann. She's the author of The Algorithm, which looks into how AI is being used in the workplace, including how it's being used to hire and sometimes fire people. This is what she found out when she talked to a woman named Lizzie, who doesn't want her full name to be used for fear of retribution. What really stood out for me is Lizzie's story. 
You know, before the pandemic, she started working as a makeup artist at a department store in the UK, you know, where people walk in and there are little booths for different makeup vendors, and she would put makeup on people and try to sell them some of the makeup products. And Lizzie told me she loved the work, like she loved being a makeup artist and helping people feel good about themselves. But the pandemic hit, stores closed, shoppers stayed home. Lizzie and her coworkers were told that layoffs were coming and they would need to re-interview for their jobs. And the company asked Lizzie and all the other makeup artists to do a one-way video interview. Lizzie needed to sit in front of a computer or her smartphone and record herself alone, answering some questions. Like maybe, why do you want to work here? Her answers would be analyzed by computers and rated. Lizzie told Hilke that she didn't enjoy these one-way interviews, but she wasn't worried because she'd done one before. That was how she first got hired. Like, she just thought it was weird. But she'd done it before. She was fine before. So why wouldn't she be fine now? When Lizzie finished the interview, she had no way of knowing how she scored, because she didn't get any feedback. So she just had to wait and wait. So then Lizzie meets with a manager of the company, and they told her that unfortunately she's one of the people who is going to be let go. She asked the manager, like, wait, wait, why? And they said it's because you scored so low on your HireVue, the one-way video interview. And to be honest, she told me she was shattered. She was upset, and she loved the job, and she didn't know what to do, where to turn. And I think she also, you know, she took it really hard. She felt like, wait, am I not good at this? Like, what is happening? Lizzie joined forces with two other makeup artists who'd been laid off as well. Together, they sued the company. 
She ended up finding out that her score on her one-way interview was between zero and 33 points, the lowest range possible. Was that a technical error? She never found out. Even when Lizzie appealed her score, she was still denied, and the layoffs still stood. In the end, the three makeup artists reached a settlement. Lizzie's experience made Hilke curious. How easily did the tech malfunction? She decided to see how these interview tools would respond if she messed with them a little bit. I wondered what would happen if I did what I call poking the algorithm, right? Like, checking it a little bit, like, see how will you react? You know, needling it a little bit, like, okay, how would you react if I do this, if I do that? I thought, well, if I say just one sentence the whole time, surely I would get a horrible score, right? Like, if I answer all the questions with, like, I love teamwork or something, no one in their right mind would hire me. And so that's what Hilke did. For every question she answered, I love teamwork. I love teamwork. I love teamwork. I love teamwork. I love teamwork. So I love teamwork. If the tool asked, what makes you uniquely qualified for this role? I love teamwork. What past relevant experience do you have? I love teamwork. Where do you see yourself in five years? I love teamwork. I said it in different intonations, and I thought for sure I would get, you know, an error message or like a 0.001 percent score, right? But I was surprised. I got pretty good scores. And that was surprising to me. I was like, wow, I clearly fooled the algorithm, right? And I didn't even fool it in a sophisticated way. In another experiment, Hilke also tried a one-way interview screener. And she answered all the questions in German. And I was sure it was going to be so crazy, and the tool was going to hate this and just give me an error message, right? 
Because clearly I cannot reach any threshold because I'm speaking in German. Like, the tool is expecting an answer in English, right? Because it asked me in English. The results of this über-serious test? I was just surprised when I got a result, and I was seventy-three percent qualified for the job. And I was like, what the? I mean, this makes no sense. It has nothing to do with the job. I just, you know, basically made sounds for the tool, right? Like, they're not even English. So I don't actually know what's going on under the hood. And that really worries me, because clearly these tools can't actually distinguish if somebody is a qualified applicant or not. Well, first of all, I am so glad I'm not a young person applying for a job right now. It's pretty wild out there. And I think what happens a lot, you have to think about talent acquisition, HR, those are cost centers for companies, right? They don't generate money. It's not the sales team. It's not the product team. So wherever I can save money on that team, you know, that usually translates into fewer headcount. That's a good thing for most companies. So suddenly there are these tools that come with, like, we're democratizing hiring, we're going to screen everyone, and this is a fair process without bias. And it's only going to cost you a few hundred thousand dollars instead of having, I don't know how many, people on your team who would do this kind of work. It's so enticing to companies. And a lot of the oversight is done by the AI vendors themselves. But they are the ones that built the technology. You sort of maybe see the conflict of interest here, right? People who build their own AI tools and then monitor them may have a conflict of interest: when they realize there's a problem, they may not act on it. 
Let's say you get through the hiring process, you get the job. Woo. But you are saying that AI is now being used through the entire life cycle of an employee. So AI might help you get hired. And then what happens when you're actually on the job? You are being monitored, potentially, by AI. Yeah, potentially. I mean, I'm going to say it's probably likely that you're going to get watched. The New York Times had one analysis and found that eight out of the 10 largest companies in the US monitor at least some of their employees. And what does that mean, monitor? It can mean anything from tracking your keystrokes, checking if your face is on the camera, every minute just taking a screenshot. It can mean that they track everything you do on your computer. So they know that you are seated and you are working? Totally. And, you know, like keystroke tracking, everything that you type, everything that you do, it can track. Like, what kind of tab are you on? Are you shopping while you're on the clock? Or are you actually doing the work? Right? It can actually record all of those things. And then some of these things are like, if you start printing a lot out of the ordinary, or you move files around, it might mean that you are a flight risk, likely to leave your job. You might be leaking company information. All of these signals can then be sent to IT and legal and other places in the company to start investigating or looking into what's up. And in the US, nothing that happens on a work computer is private. You have no expectation of privacy. That is case law again and again. So everything that you do on that computer belongs to the company, and the company does not have to tell the employees that they're monitoring you. 
I mean, it's interesting to me, Hilke, because, as you point out, we'd heard for years about people who work in warehouses or call centers, or drivers, where it's easy to measure whether they've completed the task they were hired for. But now you say that the technology is going into areas where it's a little more difficult to measure, employees who work from home, contract lawyers, healthcare workers. Are the tools different? Describe for us what is changing. Yeah. It was just very easy, the technology was already there, to check in warehouses, right? Like, how many items do you put in the box within an hour, or whatever. And, you know, there were these algorithms that the company used. And with drivers, you can check GPS data. But now we can record a lot of signals on your computer. So you can check, you know, I interviewed someone, Emily Smith, and she was a medical coder. So she had to look at a lot of medical files and read through them and then code them, right? And that is something that was tracked. So if she was too long in a medical file, even if it was a long file, she got dinged, right? Or her boss would message her and say, like, okay, now go read some files that are shorter so you get your count up. So everything was checked. And she had 15 minutes of break time, and she said she sometimes set a timer because she needed to go to the bathroom and get a snack. And, you know, one time it took her longer than 15 minutes, and she immediately got a call from her boss asking what's up. And she was always worried that the little green dot would go to a red dot, which means that she's not online, right? And she would get penalized. Her company would put out a report every day of who were the most efficient medical coders that day and who were not. And she was like, God forbid you're at the bottom. 
And, I mean, she developed anxiety about this. And she knew she was tracked, right? A lot of people don't know that they're being tracked. So some people who are aware of this have two computers, right? At the workstation, they have their work one and then a private one if they want to order something, because they know they might get in trouble. You said that there's this whole niche industry. Oh, productivity theater? Yes. Can you give me some examples of that? Humans are very creative. So, you know, some humans may figure out, like, whoa, they're tracking me, right? Like, you know, they check if the little green dot is on for active, right? Sort of super basic things. And, you know, you might check in on Slack at 7:30 in the morning, like, hey, I'm at my desk working really hard, sending a message to everyone. And then you turn around and take your dog for a walk, or do a load of washing or something, and come back at 10. But, you know, you sort of put up this myth that you're already at your desk and working. There are people that go to meetings just to show their face; it actually has nothing to do with their job. They just want to show, like, hey, I'm working. I'm here. I'm engaged, right? So we have this productivity theater, and, I think it was Microsoft, found out that a lot of people spend about an hour a day on productivity theater, not actually doing work, but just pretending to work. And some people even have, you know, they're called mouse jigglers. The old way, they actually would jiggle the mouse so the computer would never go into sleep mode, right? You would always look like you're active, because they were monitoring whether people were using their mouse as a way to see if they were working, right? Okay. Yeah. That is a proxy for working. 
And now you can get these little USB things, and they just sort of digitally wiggle your mouse, I guess. Oh my God. Are you serious? Yes. There are all kinds of ways. Human ingenuity never stops. All kinds of ways to trick the algorithms. But we also see now it's a little bit harder to understand, well, what makes you successful as a lawyer, for example? Is it the briefs that you file? The client calls that you have? The meetings? It's really up in the air, right? It's actually hard to say, a lot of times, what makes you a successful person in your job. And often different people have different ways of doing it. But what we see now is sort of signals that are being recorded, and you're being compared to your next neighbor. So for example, you're the vice president of, I don't know, communication in the US, and you probably have a counterpart in Europe, for example. So one of you is, like, more successful. And then whatever my counterpart in Europe does, I get compared to. Like, well, that person who is deemed successful sends 500 emails a day. Why aren't you sending 500 emails a day? They're doing this, they're closing tickets within 10 seconds. They're only spending five minutes per patient, and they have better outcomes. Why are you spending 10 minutes with patients? Like, we can track all of that. And the question becomes, if that's what the measure of success is, then who are we to, I mean, that's what the company has to decide, right? I suppose. I mean, I guess so. I think we do know, though, first of all, that it leads to a lot of productivity theater, if people can avoid it, right? They come up with these theatrics. So it actually doesn't lead to more focused, productive work. So you're saying we don't, uh, 500 emails. Yeah, right. Yeah. You know, so it's not actually leading to that. 
We know it leads to a lot of anxiety, and there's this weird contradiction: a lot of companies and CEOs say that in the age of AI, critical thinking and creative, out-of-the-box thinking are what they want from their employees, that that's what humans bring to the table. But in reality, we just track emails and the mundane things we can track, because we can't really track critical thinking skills or creative output. We can often only track signals that happen on your computer. When I tracked myself, I wasn't very productive, because as a reporter I spent enormous amounts of time on the phone, and that doesn't get tracked by any of these tools unless I do it through the computer. And what about networking, team building? That probably doesn't always happen on the computer. It goes unnoticed. And suddenly humans start just following "orders," quote unquote: okay, I'm going to send 500 emails. Whether they lead to anything doesn't matter; I just have to hit the metrics. And it's not always the metrics that lead to success. In fact, we know that people are not more productive when they're being surveilled. We know that they're less productive. My other question is, there are ways I could see this sort of tracking potentially being useful. There are glasses that drivers can wear that alert them if they notice the driver is getting sleepy and should pull over and take a break. I've done a lot of research into physical movement breaks and how they actually enhance people's productivity. I love the idea of something on your screen saying: you are not paying attention, I can tell by your eye movements, go take a break.
But the potential for that to be used in a creepy, non-nurturing way... Yeah. Right. I mean, I wish we had AI that would help people. In job interviews, say, if a hiring manager had AI that gave them feedback afterwards, maybe saying: hey, you really shouldn't chit-chat about your high school and how you met people; you really should stick with the structured interview questions. Something that coaches us. Well, that's interesting, because I have a friend who's using a new product, I think she's testing it for someone, where she records the meetings she goes to and then has the AI analyze them and tell her whether she was supportive of her colleagues, whether she gave constructive feedback. That already exists. We've seen this before: as a company, I can check, are you talking over people? Do you let people speak? Are you bullying people in meetings? That kind of analysis. And I think it can be helpful for people. But what we see, when push comes to shove, is that once this data exists, companies use it for something it wasn't intended for. For example, there's a company that checked people's key card swipe-ins for promotions. They wanted to see how many hours you were at the job and promote people based on that. We all know that's a very bad proxy for success; just because you're in the office 10 hours doesn't mean you're successful. But that's what they used. And then during the pandemic, they needed to do layoffs, and what data did they turn to as part of the decision making? The key card data. Because they wanted the people who spent fewer hours in the office to be penalized, because they're supposedly lazy and not doing their job.
They're not productive. Where it gets troubling: it could be that you were sick that week, or maybe you left early because you were sick, or you had a family emergency. And we see that people with disabilities and caregivers, more often women than men in this country, get penalized because they can't actually spend 10 or 12 hours in the office. They have caregiving obligations, and they might get penalized. In a minute, more with journalist Hilke Schellmann, and some very specific tips for job applicants: what you can do to make the algorithm notice your resume. I'm Manoush Zomorodi and you're listening to the TED Radio Hour from NPR. Stay with us. It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. We've been spending the hour with the author of the book The Algorithm, Hilke Schellmann. She's been doing investigative journalism into AI in the workplace. In her book, she outlines how AI tools are dramatically changing how people get hired, how they're surveilled at work, and ultimately why they may be fired. A lot of companies have flight risk predictions that predict which of your employees is at risk of leaving in the next 12 months. What am I going to do as a hiring manager if it's predicted that you're going to leave? Am I going to give you a raise and somebody else not a raise? Am I going to put you forward for a leadership program or not? There are all kinds of questions, and these are literally predictions. It doesn't mean they come true. And the reality is, if you're going to move jobs, sure, signals like not having been promoted in a few years are probably part of it. But it could also be that you want to move cross-country because of your spouse, which has nothing to do with your job or any signals you create in the job. Humans have numerous reasons why they may change jobs or not.
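Mechanically, a flight-risk prediction like the ones described here is usually just a weighted score over workplace signals that happen to correlate with attrition. A minimal, entirely hypothetical sketch follows; the signal names and weights are invented for illustration and do not come from any real vendor's model, which is exactly the opacity problem the conversation raises:

```python
# Hypothetical flight-risk score: a weighted sum of workplace signals.
# Signals and weights are invented for illustration only; real vendors'
# features and models are opaque to the people being scored.
WEIGHTS = {
    "years_since_promotion": 0.15,   # correlated with leaving, not causal
    "declined_training": 0.20,
    "fewer_internal_messages": 0.10,
}

def flight_risk(signals: dict) -> float:
    """Clamp the weighted signal sum to [0, 1]. This is a guess, not a
    fact: it knows nothing about, say, a spouse relocating cross-country."""
    score = sum(WEIGHTS[k] * v for k, v in signals.items() if k in WEIGHTS)
    return max(0.0, min(1.0, score))

employee = {"years_since_promotion": 3, "declined_training": 1,
            "fewer_internal_messages": 2}
print(round(flight_risk(employee), 2))
```

The point of the sketch is how little it takes: any correlation, however spurious, can be folded into the score, and the output looks like a precise number even though no causal claim stands behind it.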
But this could have material effects on your job if your manager knows that you're considered a flight risk. They might also start looking at your printing history, your file-moving history: what is happening here? And then, as companies move towards layoffs and firing, they do take some of these data inferences as part of the decision making, or some of them might even rely on it wholesale. If you fall below the productivity algorithm's threshold in a warehouse, you might get fired because of that, and that is purely an algorithmic decision at that point. Maybe there's human oversight to double-check, or: you didn't deliver as many packages as we had predicted based on your GPS over these 50 days, so we're going to let you go. We'll see more and more of that as we gather more data. Some of that data can be used for layoff decisions. That's also happening at the big tech companies, right? That when they do a round of layoffs, they start with anyone who has scored below a certain performance rating, and that's the first layer to go? I don't know that for sure. We know very little about what is being used inside these companies. All of the data that is generated by humans in the workplace, most of it if not all of it, has biased decision making in it. We know from social science that in performance reviews, women and people of color are usually rated less successful because of their gender and their ethnicity. And we all know these examples: a woman who is assertive might actually be penalized for it, where for men it's, wow, what great leadership skills, and women are bossy, they're mean.
So all of this data, which is there and which companies like to use for these kinds of people analytics and other tools, is biased. The bias now gets "objectified," quote unquote, in these tools and gets buried in the math of it. And it looks like a perfect score. It looks like, wow, this tool scored employee A at exactly 78.65%. It looks mathematically correct. And the allure of math and rankings is something humans just can't get out of. I do want to ask you: what's changed since you first started doing this research is that there really is a sense now that we've reached a tipping point. There's a lot of talk about fear of AI and the robots who are coming for your jobs. On the flip side, there are people saying, well, what we'll need to hire for are people who have the good human traits: good judgment, the ability to make decisions or provide caretaking, creativity. The human skills are going to be more important than ever. How do you see this next wave of AI not just informing the workplace, but truly entering it and doing the work? Yeah, that's a tough question. It's really hard to predict the future. I do think we are at a tipping point where we have AI tools generating your resumes. We have AI avatars coming to your job interviews. We have avatars interviewing other avatars for jobs; it's AI versus AI. A lot of applicants joke online, may the best AI win. What are we really measuring here? And hiring managers and recruiters do feel this very acutely. We also know from surveys of leadership at companies using AI tools: almost 90% of respondents said they know their AI tool rejects qualified candidates. So they know this doesn't work. But the allure is still there, and it does reduce the workload.
And then we see this idea that a lot of companies want to hire for soft skills: being creative, team workers, collaborators, overseers, people who can keep a cool head when they have to supervise, I don't know, hundreds of AI agents. That's the space we're moving into now. But the thing is, we actually have no way of measuring that in the hiring process. I always joke with people: if you can find a way to measure soft skills objectively in humans, that is a billion-dollar enterprise. You'd be shoveling money. Which is why, I guess, companies are still going to keep trying. Yeah, I think so. We really haven't found a way to do it, so hiring is very difficult. So what are we moving towards? There is this doomsday prediction some people make, that the computers will take our jobs. Yes, they're going to take some jobs. But for most of us, what it looks like is that they're going to take some parts of our jobs and not other parts, because some parts of our jobs can actually be automated. And that's great. When I started as a journalist, on my first day I had to take notes and learn to write really quickly. Then we started to have recordings. Great, but you still had to transcribe every word in them. Thank God we now have transcription software; it makes my life so much easier. So I think we'll see a lot of synergy between humans and AI tools going forward. I think we're moving away from this dystopian idea that AI will take over all our jobs, although it will take over some: the stuff that is very repetitive, maybe clerical work, or jobs where you put in numbers all day. Some AI tools can probably take that over.
Some AI is really good at things like copy editing or translation. Often you still want a human in the loop, but I think a lot of publishing houses are starting to use AI translation first and then have a human look over it to make sure it's all correct, versus having a human do all of the work. So I think we'll see a lot of that, and it will change some jobs. But are they all going away, and are we all going to have cocktails on the beach? No. In fact, right now our workload has sort of increased. But it's really hard to predict the future. We often feel like there's going to be this AGI, this superintelligence, that takes over, and I'm not qualified to give predictions on that. But what I will say is that the AI we see in the workplace consists of very basic AI and algorithmic tools, and a lot of them really don't work. So we shouldn't use these tools on high-risk decisions where we actually decide: are you going to keep your job, am I going to hire you? Do you want to use AI for a spam filter? Absolutely. If it doesn't work, I'll find another spam filter. It's actually great as a spam filter. It's great for a lot of things. I use AI all the time, but I wouldn't use it for high-risk decision making unless I know that it's a validated tool and that it actually works, and not just because it makes my life easier. So, hearing our conversation, it may sound sort of dystopian, and people may feel: what am I supposed to do? How am I even going to do anything about this? Or maybe somebody who's retired is thinking, I'm so glad I'm not in the workforce right now. But I love how you say in the book: by exploring how AI is being used in our workplaces, we will be better prepared to question its usage when we encounter it everywhere else.
Tell me more about that idea. Exactly, yeah. I do think that once you understand how AI works in hiring or in people analytics, you can question other tools, because these tools are often built the same way. We can all become AI experts, in a sense. There are certain things you can take away: here's how to ask skeptical questions about training data, how a tool was built, how it was validated, how the results were tested. And here's how you can run a pilot phase if you want to use an AI tool: pilot it, test it, try it. That's what I want everyone to start doing, experimenting and vetting the tools yourself, because you're all going to get pitched these tools, and you may want to use some of them for your own work. And I think that's how we make the world a better place. AI can be good for all of us if we throw out the bad tools. We don't want to create more damage; we want to use the tools that actually work. So we all need to be a little bit more educated, to vet out the tools that don't work, and we need to know what goes wrong and what goes right in these tools to actually do that. Unfortunately, as a job applicant, you are a little bit at the whims of the companies that send you these tools. I call job applicants forced consumers of the technology. Because if you want the job, and somebody sends you a link saying, hey, you have to do this one-way video interview in the next 48 hours if you want to be considered, are you going to say no? Most likely not, because you really want the job. So you're forced to consume it, and you don't know much about the tool.
I do think it is possible to have better tools and build better tools, and to demand more transparency from the vendors you buy or use AI tools from, so that when we make high-stakes decisions about humans, we know what kind of decisions we're making. So at this point, you may be wondering: what can I do if I'm applying for a job? What are some tactics that may help me get the position? Because boy, does it seem hard. Well, first, Hilke says, make it easy for the algorithm to see that you're qualified. Keep your resume boring. Traditionally, I was always told to try to stand out with your resume: maybe you had two columns, cool different colors, different designs. None of that works in the AI age. To make your resume machine-readable, don't include images, double columns, or any special characters; a computer may not be able to correctly ingest that information. I was shocked when I talked to the CEO of ZipRecruiter, who said so much information gets lost, because when computers ingest resumes they put your work experience in the wrong fields, and other things. It's shocking. So use a super common template. You want to do everything to not confuse an algorithm. That leads us to tip number two: write clearly and simply, and include specific data. Use short, crisp sentences. Be declarative and quantify achievements. You didn't just save money; you saved the company, I don't know, five million dollars or whatever it is. Quantify, quantify, quantify. Number three: the AI scanners will be looking for keywords from the job description, so use them in your application. But don't just copy the description word for word. Make sure you don't have 100% overlap, because then some AI tools will throw you out; they think you just copied the job description. You want something like 80 to 90% overlap. Tip number four: list all your skills, even the most obvious ones.
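The keyword-overlap target in tip number three can be sanity-checked with a few lines of code. This is an illustrative sketch, not any vendor's actual screening logic; the helper names and sample texts are made up, and the 80–90% band comes from the advice above:

```python
import re

def keywords(text):
    """Lowercase word set, dropping very short filler words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def overlap_ratio(resume, job_description):
    """Fraction of the job description's keywords found in the resume."""
    jd = keywords(job_description)
    return len(jd & keywords(resume)) / len(jd) if jd else 0.0

jd = ("Seeking data analyst with Python, SQL, dashboards, "
     "and stakeholder reporting experience.")
resume = ("Data analyst with experience in Python, SQL, and building "
          "dashboards for stakeholder reporting.")
ratio = overlap_ratio(resume, jd)
# High but not total overlap, per the tip: 100% looks like a copy-paste.
print(f"overlap: {ratio:.0%}", "ok" if 0.8 <= ratio <= 0.95 else "adjust")
```

A real applicant-tracking system will tokenize and weight terms differently, so treat a check like this only as a rough gauge of whether the job description's vocabulary shows up in your resume at all.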
A lot of companies are moving towards skills-based hiring. Instead of more arbitrary qualifications, like having a bachelor's degree, they really want to look at whether you have the skills to do the job. So make sure you list all of your skills, even the soft skills: that you're a collaborator, a team builder. List everything, and note that a lot of people now put skills in a separate section with bullet points, so a machine can easily ingest it. Tip five, and this one's kind of funny: don't be afraid to use AI to give your resume the shine it may need. You know, English is a second language for me. Large language models and chatbots are great at polishing your resume and making sure all the grammar is on point. You can use that to your advantage. And finally, tip six: skip the big job platforms and apply directly to the company itself on its website whenever you can. Yeah, what I found out when I talked to a bunch of recruiters is that the first step they take when they look at an applicant pool is to look at the company's internal system. Go to the company's website and apply directly; that's the first pool of people they look at. Wow, so they're using that as a way to cull the numbers to begin with. Is it that going directly to their site shows initiative, or is it just more easily accessible data? I have no idea. But it was mentioned to me so many times that I thought, I want to give this as a tip. What I always heard growing up was, it's who you know. Does that matter anymore, the human relationships, the networking? Oh, totally, totally. In fact, if an employee currently at a company recommends you, you often bypass.
Sort of the first phases of rejection by AI tools, and you land directly on the desk of a recruiter or hiring manager. So yes, try to use any of that. It's the same as trying to message a recruiter: you want to get in front of the human who hopefully reads your message and puts you on the yes pile for the next round of hiring. We see this again and again. I was surprised when I talked to the former vice president of product management at LinkedIn, who told me that if you engage with recruiters on job platforms, that actually signals to the AI that you're actively looking for a job, and it will recommend you for more opportunities; it will rank you a little bit higher on the hiring manager's side. Because the AI tool isn't optimized to find the most qualified people. It is optimized to find the most qualified people likely to apply. So nobody would put me in front of a recruiter, because I haven't applied in years and I love my job; I'm not likely to actually send a resume. It would find people who are likely to apply, and engaging with a recruiter or hiring manager on LinkedIn or wherever is actually a signal. And I was like, wow, I didn't know. I used to never answer recruiters, and now I actually email them back and send them a kind message, because I don't want to be the dinosaur that never applies. All of these signals can suddenly become meaningful in this day and age. That was investigative journalist Hilke Schellmann. She's a professor of journalism at NYU and the author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. You can watch her talk at TED.com. Thank you so much for listening to our show today. If you got something out of it, if it was helpful in any way, please leave us a comment on Spotify. You can also email us at tedradiohour@npr.org.
We read every comment and email; we love hearing from you. You can also leave us a rating or review on Apple; it means a lot to us. This episode was produced by James Delahoussaye and edited by Sanaz Meshkinpour and me. Our production staff at NPR also includes Katie Monteleone, Rachel Faulkner White, Matthew Cloutier, Fiona Geiran, Harsha Nahata and Phoebe Lett. Our executive producer is Irene Noguchi. Our audio engineer was Jimmy Keeley. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Roxanne Hai Lash and Daniella Balarezo. I'm Manoush Zomorodi and you have been listening to the TED Radio Hour from NPR. Support for NPR and the following message come from the Kauffman Foundation, providing access to opportunities that help people achieve financial stability, upward mobility and economic prosperity regardless of race, gender or geography. Kauffman.org