Summary
Dr. Joy Buolamwini, computer scientist and digital activist, discusses her groundbreaking research on algorithmic bias in facial recognition systems, her journey founding the Algorithmic Justice League, and her advocacy for algorithmic accountability and consent culture in an increasingly AI-mediated society.
Insights
- Accurate AI systems can enable mass surveillance; the question isn't just 'does it work?' but 'what technologies do we want in society?'
- Intersectional analysis reveals stark performance disparities: some systems show 34% error gaps between best and worst performing demographic groups
- Opt-out mechanisms for facial recognition at airports are deliberately obscured through design, requiring active resistance and awareness campaigns
- AI hype and FOMO drive organizations to replace human workers with inadequate systems, creating real-world harm (e.g., NEDA chatbot worsening eating disorders)
- Storytelling and poetry are as critical as research for moving algorithmic justice work into public consciousness and policy change
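The intersectional auditing idea in these insights — comparing error rates across combined demographic axes rather than one axis at a time — can be sketched as a simple subgroup error-rate computation. The function and toy data below are illustrative only; they are not the actual Gender Shades benchmark or its numbers.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute per-subgroup error rates for an intersectional audit.

    Each record is (group, predicted_label, true_label), where `group`
    combines several demographic axes, e.g. ("darker", "female").
    Auditing each axis in isolation can average away the gaps that
    only appear at the intersections.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative toy data: (intersectional group, prediction, truth).
toy = (
    [(("lighter", "male"), "male", "male")] * 99
    + [(("lighter", "male"), "female", "male")] * 1
    + [(("darker", "female"), "male", "female")] * 34
    + [(("darker", "female"), "female", "female")] * 66
)

rates = subgroup_error_rates(toy)
# Gap between best- and worst-performing intersectional subgroups:
gap = max(rates.values()) - min(rates.values())
```

Reporting the best-to-worst gap, rather than a single aggregate accuracy, is what surfaces disparities of the kind described above.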
Trends
- Shift from accuracy-focused AI metrics to societal impact and consent-based frameworks for technology deployment
- Municipal and state-level AI regulation gaining traction as federal legislation stalls; risk of federal preemption rolling back protections
- Always-on recording devices (pendants, glasses) entering consumer market without clear consent mechanisms or privacy safeguards
- Intersectional bias auditing becoming standard methodology for exposing disparities across multiple demographic axes simultaneously
- Corporate pushback and delay tactics against bias research intensifying as AI systems become embedded in high-stakes decisions (hiring, lending, healthcare)
- Algorithmic Justice League model expanding globally; international adoption of freedom-focused campaigns and auditing methodologies
- Integration of arts, poetry, and performance into technical research communication for broader public engagement and policy influence
- Expansion of facial recognition infrastructure in government (TSA planning 400 airports) outpacing public awareness and consent mechanisms
Topics
- Facial Recognition Bias and Accuracy Disparities
- Algorithmic Justice and Equity in AI Systems
- Intersectional Analysis in AI Auditing
- Consent Culture and Privacy Rights
- Mass Surveillance Infrastructure and Opt-Out Mechanisms
- AI Regulation at Municipal, State, and Federal Levels
- Gender and Racial Bias in Computer Vision
- AI Ethics and Accountability Frameworks
- Corporate Responsibility and Pushback Against Research
- Human-Centered AI and Technology Design
- AI in High-Stakes Decision Making (Hiring, Lending, Healthcare)
- Storytelling and Arts in Technology Advocacy
- Algorithmic Auditing Methodologies
- AI Workforce Displacement and Human Labor
- Always-On Recording Devices and Consent
Companies
Microsoft
Gender Shades study found near-perfect accuracy on lighter male faces but 34% error rate on darker female faces
Amazon
Facial recognition system tested in Gender Shades research; company aggressively disputed findings while bidding for ...
IBM
Facial recognition system audited in Gender Shades; showed significant performance gaps across demographic groups
Google
Google Photos labeled dark-skinned people as gorillas; company fixed issue by removing gorilla label rather than impr...
Face++
Chinese facial recognition company tested in Gender Shades research; showed better performance on darker male faces
MIT Media Lab
Buolamwini's research home where she conducted facial recognition bias work and founded Algorithmic Justice League
Carter Center
Buolamwini's senior capstone project partner; deployed her health data system to 17 million people in Ethiopia
iSchool Zambia
Partner organization for Fulbright project; Buolamwini trained Zambian students to develop mobile apps addressing loc...
Georgia Institute of Technology
Buolamwini's undergraduate institution where she first discovered facial recognition bias with social robot Simon
Oxford University
Rhodes Scholar institution; now hosting Buolamwini as inaugural fellow for ethics and AI institute
TSA (Transportation Security Administration)
Deploying facial recognition at airport checkpoints; Algorithmic Justice League's Freedom Flyers campaign targets opt...
Department of Homeland Security
Committed to adding opt-out language to TSA facial recognition kiosks following Algorithmic Justice League advocacy
NEDA (National Eating Disorder Association)
Fired human staff and replaced with AI chatbot that worsened eating disorder advice; demonstrates AI replacement harms
Limitless
Always-on recording pendant device raising consent and privacy concerns in emerging consumer AI hardware
People
Dr. Joy Buolamwini
Computer scientist, digital activist, and founder of Algorithmic Justice League; conducted Gender Shades facial recog...
Debbie Millman
Host of Design Matters podcast; conducted the live interview with Dr. Buolamwini at WBUR Festival
Ethan Zuckerman
Director of MIT Media Lab's Center for Civic Media; supervised Buolamwini's research assistantship
Kimberlé Crenshaw
Legal scholar whose intersectional analysis framework influenced Buolamwini's approach to auditing algorithmic bias
Dr. Matt Wood
Amazon corporate VP who publicly attempted to discredit Gender Shades research findings
Jill Connell
High school computer science teacher in Memphis who taught Buolamwini three levels of CS classes
Secretary Mayorkas
DHS Secretary who met with Buolamwini to discuss TSA facial recognition opt-out and traveler experiences
Sam Altman
Referenced in context of future computing devices and always-on recording technology concerns
Jony Ive
Referenced in context of future computing devices and always-on recording technology concerns
Allie Miller
AI thought leader whose LinkedIn post illustrated privacy violations from always-on AI recording devices
Quotes
"It wasn't as easy as saying, okay, let's make more inclusive data sets. And when we have more inclusive data sets, we'll have more accurate facial recognition. But accurate systems can be abused. And so the analysis had to be not just how well does the technology work, but what kind of technologies do we want in society in the first place?"
Dr. Joy Buolamwini
"You actually have the right to opt out, but most people don't know, and it's not surprising because you go there and they say, step up to the camera."
Dr. Joy Buolamwini
"If we don't opt out and if we don't resist, then that narrative persists that people want this, which they'll use. And they claim right now that they only put the facial recognition on some of these checkpoints and that they don't have it on the overall system. But all of this is just one line of code from being mass surveillance if we don't actually resist."
Dr. Joy Buolamwini
"I think about this is why stories matter so much. It's not even the power of AI, but the power of the stories we tell about AI."
Dr. Joy Buolamwini
"The people have a voice and a choice. When defiant melodies harmonize to elevate human life, dignity and rights, the victory is ours."
Dr. Joy Buolamwini
Full Transcript
Hi, I'm Frances Frei. And I'm Anne Morriss. And we are the hosts of a new TED podcast called Fixable. We've helped leaders at some of the world's most competitive companies solve all kinds of problems. On our show, we'll pull back the curtain and give you the type of honest, unfiltered advice we usually reserve for top executives. Maybe you have a co-worker with boundary issues. Or you want to know how to inspire and motivate your team. No problem is too big or too small. Give us a call and we'll help you solve the problems you're stuck on. Find Fixable wherever you listen to podcasts. Head to the link in the description for more. It wasn't as easy as saying, okay, let's make more inclusive data sets. And when we have more inclusive data sets, we'll have more accurate facial recognition. But accurate systems can be abused. And so the analysis had to be not just how well does the technology work, but what kind of technologies do we want in society in the first place? From the TED Audio Collective, this is Design Matters with Debbie Millman. On Design Matters, Debbie talks with some of the most creative people in the world about what they do, how they got to be who they are, and what they're thinking about and working on. On this episode, computer scientist and digital activist Joy Buolamwini talks about her career and about facial recognition technology in the airport.
You actually have the right to opt out, but most people don't know, and it's not surprising because you go there and they say, step up to the camera. Dr. Joy Buolamwini is a computer scientist and a poet of code who uses art and research to illuminate the social implications of artificial intelligence. She founded the Algorithmic Justice League to create a world with more equitable and accountable technology. Her MIT thesis methodology uncovered large racial and gender bias in the world's largest and most powerful technology companies. Dr. Joy's journey is depicted in the critically acclaimed documentary Coded Bias, for which she is now on a five-year anniversary world tour. And the documentary sheds light on threats artificial intelligence poses to civil rights and democracy. And she is also the author of the bestselling book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Dr. Joy Buolamwini, welcome to this very special live episode of Design Matters at the WBUR Festival. You know, sometimes dreams come true and I'm living one right now. So I'm so happy to be here. Thank you. Thank you. Let's begin with baby Joy. Baby Joy. Okay. You were born in Edmonton, Alberta, just as your father was finishing his PhD, and you've described yourself as a daughter of art and science. What do you remember most from that unique collision of your mother's paints and your father's pipettes? That's a great question. In the book, I say how my mother asked questions of colors. My dad asked questions of cells. And in that exploration, I started asking questions of computers. And so for me, I literally grew up with art and science as companions through my parents. And in Oxford, Mississippi, that meant going to my dad's lab and feeding cancer cells, looking at squiggles on computers, learning later these were graphs and so forth, flow cytometry and that kind of thing. And then my mom, I just thought every weekend you go to art galleries and pitch paintings.
I didn't realize that's what really was going on. And so I grew up with those worlds and it felt very much like an invitation to be creative, whether through scientific inquiry or artistic inquiry or, for me, playtime. Right. You know, so I think that was a true gift that I see now in the way that I do my work as a poet of code. You spent your early years in Ghana before moving to Oxford, Mississippi. What were the values that shaped you most in those formative years of in-betweenness? I think the first thing is my first language being Twi. And so I used to have, I have all kinds of accents. I code switch often. But when I first came to the United States and I was in Oxford, Mississippi, they actually put me in speech therapy. Why? They didn't realize English was my second language. They should have asked more questions, you know. And maybe I really did need to be in there, but we can deconstruct that later. I recently returned to Ghana last year after 30 years, and I don't look that old, but it was three decades since I'd been there, and it was the best homecoming I could have imagined. In what way? One of the things that I noticed is all of my relatives were looking at me and I was looking at them, but none of us wanted to be rude or stare. So we would like look at each other through reflections and like just catch an eye. So it felt like I was time traveling when I would see my aunts and uncles. I was like, oh, that's what my brother is going to look like in a few years. Or, oh, that thing I thought was just my dad, I see it in all of his siblings. So that kind of way. And then also connecting with my young cousins. I think they range from about age eight to 26. So I'm on the older end of the cousins within my more immediate family. And so it felt like a very warm embrace, a long awaited embrace.
Given you were so young when you left Ghana to go to Mississippi, did your family share stories about Ghanaian excellence or legacy that helped shape your own sense of possibility? Yes, I'm third-generation PhD. So my grandfather was a dean of a school of pharmacy, Kwame Nkrumah University of Science and Technology. And so we have some Ghanaians in the audience here as well. They went to good schools too, even if it wasn't science and tech, which is all solid. And so growing up, it wasn't like, oh, be excellent or that kind of thing. It was just around us, as this is our family legacy. There's this value of education. There's this curiosity. Got a library card very early, and we used to live within walking distance from the Oxford Public Library. So my brother would roll me there in a wagon and he introduced me to the Boxcar Children and Hardy Boys and that kind of thing. I have this vision of him driving you back in the wagon with stacks of books around you. Well, it was so fun because right now the documentary Coded Bias, we're on world tour. And we just did a screening in Oxford, Mississippi at that same public library. And on the stage, I used to watch magic shows and puppet shows. And then they were showing Coded Bias. And my nieces were there. My brother was there. One of my childhood neighborhood friends was there as well. So kids I grew up playing with and eating honeysuckles and, you know, that kind of deal. So it was really a nice full circle moment. And then a friend I had in Oxford, England, where I later studied, she had now become a professor at Ole Miss. And she was there with her baby and her dog and her husband. So it was this kind of culmination of many strands, different elements of my life there in Oxford, Mississippi. So the Southern twang had come out just a little bit, you know, to remind me where I came from. But it was all good. It was a good time. There's a lot of kismet in your life, and we'll get to that.
As a teenager in Memphis, you were building websites to cover your basketball team's uniform costs, writing Java games, pole vaulting and skateboarding. What did technology represent to you then as you were forging a path as an athlete? Yeah, well, it was a means to cover my dues. So instead of paying for the sporting dues, it was, OK, let me make a social media, a social network for the track team. Let me make a website for the basketball team because I'm going to be warming the bench. Right. I might as well contribute in some other kind of way. My first website was really for my Latin club. Of course it was. Part of the National Junior Classical League. As one does. And I spent so much time on that website. It had animations and I was coding in Flash. And I used to always say it was top 10 in the country. So probably meant it was number nine, but whatever, you know. So that was a little bit of encouragement. And then I was so fortunate, now that I look back at it. At the time when I was in Memphis, Tennessee, I had the opportunity to take three different computer science classes as a high schooler. And part of the reason was because of my teacher, Ms. Jill Connell, and she had this one-room kind of schoolhouse where, on each of three walls, with her desk on the main wall, she would teach the first version of computer science, the first class; she'd do the second one; and then you'd have the AP class. So I circled those walls, right, for three years. And that really gave me a great basis. And I was looking back at some of the data. And I think at the time I took the Advanced Placement computer science test, only five kids in the state took it. And so the fact that I was in that specific school with a young teacher eager to make it work, you know, even if it meant having to come up with three different lessons in that same period, was really, really fortunate. And, the nerd I am, I would spend my lunch period there coding Lego robots.
And later when I would come to MIT, I would work in the Lifelong Kindergarten group that helped create and establish the Lego Mindstorms as well. So I always shout out to Jill Connell and follow her Facebook posts and her kids now and all of that. Well, you just said that you were a tech nerd or a computer nerd. Definitely. But you were also a coder, a creative and an athlete. So how did your friend group view you? I was always in between spaces. And so for my parents, as long as I got straight A's, I could do what I wanted. Right. So I was like, cool, I'll get the grades and let me try basketball. Let me try track and field. I didn't actually want to do cross country. I was recruited to cross country. And then I remember running my first mile, throwing up, and the coach was like, I think you got this. Do I? So that's how I ended up doing three different sports in high school. But I really fell in love with pole vaulting. I know. The most of all of those things. And didn't you have aspirations to go to the Olympics? I mean, if you don't do it, you've got to do it, right? Aim high. No pun intended. I couldn't vault higher. You know, you don't start doing a full pole vault. You start in the grass, then the sand, then you move to the pit. So before I'd even cleared anything in the grass, I was already saying Olympics, right? So that just tends to be my mentality, to aim high. You attended the Georgia Institute of Technology for your undergrad degree. Yellow Jackets. But you first considered majoring in international affairs and biomedical engineering. Yes. You eventually transitioned to computer science. What did you envision doing professionally at that time? Oh, I mean, there were so many things I wanted to do. My first career aspiration was to be a professional skateboarder. Did that come before or after pole vaulting? Before pole vaulting. So I started skateboarding when I was around 11 years old. I watched A Goofy Movie.
And in A Goofy Movie, they made skateboarding look so cool. So I got like the cheapest skateboard you could ever get from Walmart. It had this purple cobra on the back. Later, I got a much better skateboard. But that's how I started. So when we moved from Oxford, Mississippi to Memphis, Tennessee, that summer I started learning how to skateboard, which meant scraping my ankles a lot, mainly. So you wanted to be a professional skateboarder, but you were majoring in computer science. So this was when I was 12. And then the reason I decided not to pursue that career path is I was introduced to the gender pay gap. So I was looking at skateboarding competitions, and I was looking at what the men got paid and what the women got paid. So the women's first-place prize was like what the 13th-place guy got. I was like, you know what, I might need to try something else. So that's when I got off the skateboard path for a little bit. I returned to it later in grad school. And do you still? Every so often. These aren't the right shoes, but if I have the right shoes, I'll try to pull out a 180 boneless. I can still do that from time to time. In your third year at Georgia Tech, you were working with a social robot named Simon. Yes. And your assignment with Simon was to see if you could have the robot engage in a social interaction with a human. You created a project called Peekaboo Simon. Can you talk about that project? Yes. So the idea with Peekaboo Simon was to program the robot to do a simple turn-taking game, right? So you cover your face, you uncover your face, you say peekaboo. So that's what I was doing. The problem is peekaboo doesn't work if your partner can't see you. My robot was not detecting my dark-skinned face. So I'm looking at my code. I'm like, I think the code is right. So what's wrong? So I didn't have that much time. And I had a light-skinned roommate, red hair, green eyes, pale skin. Oh, you're perfect. So I used her to test it and it was working on her.
So when it came to do the demo, I just made sure somebody with light skin tested it. And so this was around 2011, when I was a junior at Georgia Tech. Your senior capstone project at Georgia Tech was an experiment with the Carter Center. And you traveled to Ethiopia to do that work. And there you piloted a health data system that eventually reached 17 million people. 17 million people. What type of data system and how did you do that? Yes. So that year, I think I kept going around saying, I want to make an impact in African nations, as kind of my little tagline when people would ask, what do you want to do? I'm Joy, I want to make an impact in African nations. Oh, that's cute. So I was doing that and there was an invitation at Georgia Tech to come to some, I think it was a tech and health summit at 8am. Like 8am? I don't know about this, right? But that year, I had my "show up, speak up, stand up." It was just like my little mantra for the year. So I said, okay, I'm going to show up. So 8am, I show up. I'm going to speak up. So I speak up, I talk about my little African nations thing. And it turned out there was an epidemiologist from the Carter Center, and they were looking for a software engineer to help them with some of their global health programs. And in particular, they did a program called MalTra in Ethiopia. And so that was their malaria and trachoma program. And they'd been doing it for a few years and now it was time for monitoring and evaluation. So they wanted to go in and take surveys and things like that. The problem is their paper-based surveys had some limitations, especially when you're trying to put in GPS coordinates. So if you get one of those digits wrong, right, you might be in a lake instead of the space you're supposed to be in. So at that time, open source was very helpful because Android had been released. And you now had these Android tablets we could program.
So the opportunity was to move their paper-based way of assessing the effectiveness of these programs onto a tablet, right, into this digital form. And so that's where I came in with my computer science skills. So I was like, hmm, surveys, you could view it that way, or we're transforming data collection, right, so we can have real-time insights on the trachoma program at the time. So that's what ended up becoming my senior capstone. And we looked at their current system and transformed a process that had taken about 30 to 90 days into something we could do in one day. And so that's why they wanted to continue to adopt it. But I realized global health wasn't for me, because while I was in the field I was making up songs about pit latrines and hygiene and things like that, and that just was not the culture of the global health people at all. I might need to try something else. But that's, yeah, that was one of the formative experiences for me, because at that time I was at Georgia Tech and I was gaining all of these technical skills. And yes, I was working on robots and so forth. But I also knew those robots weren't going to be in people's lives in a major way anytime soon. It might be sooner now, but then a decade ago, it wasn't happening. And this desire to make an impact with the tech skills I was gaining at Georgia Tech, that's why that year I kept saying, okay, I'm going to speak up, you know, show up, stand up, all of that. So stood up in Ethiopia. You then applied for and were awarded a Fulbright Fellowship, which took you to Lusaka, Zambia. Yes. There you taught young Zambians to make mobile apps. Yes. You partnered with iSchool Zambia and worked on the ZEduPads. Yes. Yeah. Yeah. Tell us about that. Yes. So after the experience in Ethiopia, I started asking some questions. Like, sure, I came in with this technology, and I had actually coded most of it in my childhood bedroom in Memphis, Tennessee. So it was during summer break.
And because of that, I made all of these assumptions about the technology and the context that weren't true. Like what, for example? So for example, I assumed that the internet speeds would be comparable. And so one of the features of that data collection process was you could upload it to the cloud. And then the people, wherever they needed to be, could get the data. But because it was very slow, we ended up realizing we needed to download the data actually on SD cards. So here I am under a mosquito net in Ethiopia, changing the code because we hadn't accounted for the difference in internet speeds and things like that. Is it true that the students thought that you weren't an instructor? They never thought of you as an instructor? I take it as a compliment now. So given that had happened in Ethiopia, when I had the opportunity to do a Fulbright fellowship, I was thinking, okay, it's one thing for me to come in. Yes, I am of the continent, but I'm also in some ways a Westerner kind of parachuting in. What would it look like to actually equip people to create the systems, the tools that address their own local context? And so that's where the Zamrise initiative came to be. And that was my Fulbright project. And then along the way, we met local skateboarders and poets. So there are a lot of side projects. I also couldn't be directly paid due to Fulbright rules. So we would barter certain things. So with the founder of iSchool Zambia, I bartered a house for the time I was in Zambia to then do training and be compensated for the apps that the students made through the training. And then they provided office space for that. So there was this whole creative bartering because of the constraint of not being able to be directly paid. We did an Indiegogo campaign and raised money to get the laptops for the students to learn how to code, because I asked. What's an Indiegogo campaign? Oh, does anyone know? Good question.
So it was an alternative to Kickstarter. So, a crowdfunding platform. And I wanted to raise, I think we needed $10,000 or something like that. And so that allowed us to get enough laptops to then work with the students. What I had noticed with the ZEduPads, when I was talking to the founder of iSchool Zambia, I said, well, who developed the software? Who developed the apps? And they're like, oh, well, we used Eastern Europeans. And I asked why. They're like, oh, we're not able to find the talent we need here. So I asked, OK, if I did a program to train, would we then be able to have apps on the ZEduPad? And so that's what the Zamrise program ended up doing, so that when it actually came out, they could say it was also built by Zambians as well. So you've talked about the poetry scene and the skateboarding community you found in Lusaka. What was the connection for you between creation, culture and code? For me, it was like a way of life. And even now I ask myself, how can I live poetically, and how can I be expressive? And so while I was living in Zambia and I was making friends and so forth, I was like, okay, yeah, what do people do for fun? Where do you go? I remember going to a slam poetry session about subsidies. It was really interesting for me, right? Just to see how people were exploring their creativity and things like that. So none of my friends were surprised when they saw the skateboard shots while I was doing the Fulbright and also delivering for iSchool Zambia and things like that. So I never have felt that I've had to separate those worlds. You can have the impact. You can do it with a little flair, hopefully a little bit of style, and connect with the artistic communities that are there. And skateboarding is another type of art. Did you teach anyone in Zambia to pole vault? No, see, pole vaulting requires some equipment. You need the pit, you need the poles. The poles are very expensive. Even getting skateboards, I learned, was tricky.
So they didn't actually have a skate park at the time. And I was asking them where they were getting their supplies from, and they were getting it from South Africa. You then went to Oxford, where you made history by proposing the first Rhodes service year. What gave you the audacity to reimagine what the Rhodes experience could be? Um, so all the work I did in Ethiopia meant that I had the qualifications to get into a global health program. So I applied. And the Rhodes Scholarship covers two to three years. And so I'd used my first year to do a master's in learning and technology. And the second year, I wasn't exactly sure what I wanted to do. So I applied to global health and I got in. And I realized I really don't want to do global health. I told you the culture, like my pit latrine raps were not hitting. It had not improved. The situation had not improved since then. And so I wrote a 40-page proposal for this year of service and shared the type of visa I would need to have. I really went in on how it would build on the work I had done in Zambia and so forth. And so basically, when I sent it to the Rhodes Trust, they said, don't get your hopes up, kid. We don't change much here. OK, I'm going to do it regardless. And you guys should get the credit, right? Anyways, they decided to agree to it. And so that became the first official Rhodes Scholar year of service. But many senior scholars told me they'd been doing years of service unofficially well before me. Have there been official years of service that have come after you based on that change? There have been, actually. And they tended to be other scholars who were really entrepreneurial. And I remember talking to some of the members of the trust and they were curious why not as many people had applied for it afterwards. And I was thinking, I mean, the profile of a Rhodes Scholar tends to be those who have mastered the academic game but aren't taking many risks.
So I'm not so surprised that if you had the option to get an MPP or an MBA from Oxford, pretty much guaranteed, or do a year of service, most people would go for the guaranteed option. So it tended to be the crazy entrepreneurs who were open to that kind of risk anyways, right, who were doing it at the time. What did you do during that service year? Oh, so that service year, I started something called Code for Rights. And so Code for Rights was thinking through the curriculum that had been created in Zambia and actually adapting it to an Oxford context and looking at what kind of problems or issues might we apply that process to. And we ended up focusing on sexual assault on campus and creating this first responders app as well so that survivors would know what their options were. Thank you for doing that work. It's an area that I have particular interest in. You were then awarded a research assistantship at the MIT Media Lab by Ethan Zuckerman. Yes. The director of the Center for Civic Media. Now, is it true that he told you that if what you're thinking of making already exists, go elsewhere? Oh, so that was one of the Media Lab professors. That wasn't Ethan. Our group was kind of the everybody-come sort of group. We had a big, wide open table. And every Thursday when we met, we would always have random guests and some regulars, you know, retirees who would come to our table as well. So Ethan was very open. But the summer before, I was in the area for my friend's wedding and I thought, let me go check out the Media Lab in case I apply there. And so I met another professor who was like, yeah, if what you're thinking of already exists, this isn't the place for you. So what was really interesting to me about Ethan's group was we were in the future factory, but we were a bit of an oddball because we wanted to talk about problems now. Right. The Center for Civic Media.
So we were doing things like creating systems that would allow kids to track school lunches to see if the government was delivering on what they had promised: the Promise Tracker app, which was deployed all over Brazil. So it was a group that was a bit out of the mainstream. But then in the time I was there, 2015 to about 2022, there was also quite a bit of change happening in the U.S., where people were saying, wait a minute, maybe with the research we're doing, we should be thinking about its immediate real-world impact, right, with the different elections and so forth. So I remember in November 2016, after Donald Trump was elected, it was really interesting in my own circles, because I'm from the South, right? So some people were really rejoicing, and other people were devastated. And I'm seeing all of that happening. And at the Media Lab, a lot more people were asking questions of, okay, if this is happening, should I really be painting walls with my smile as my project? Or maybe there's something else I should be doing. And so I remember the day after the election, people were suddenly looking to our group. The oddball group that talked about problems now became the mainstream. And that's around the time I was also starting to explore the work that became the Algorithmic Justice League. And so that was a transformation from being at the margins in the future factory to being a group that was viewed as a leader within the future factory. You signed up for a course that changed the trajectory of your life. Yes. Science Fabrication. Yeah. This was specifically focused on building fantastical futuristic technologies. What did you begin to build? Yes, this is where I started to explore this idea of shape-shifting, right? And so we mentioned earlier, I'm from Ghana. I was inspired by stories of Anansi, the trickster spider that could also shapeshift. And I wanted to do the same, but I had a six-week deadline.
And so instead of shifting my shape, right, I decided to shift my reflection in a mirror. And so in that process of figuring out how to make it look like there's a mask on my face through a mirror, that's when I started to experiment with computer vision. So first you have to detect the face. So once I got that going, I started putting different people's faces on my own. So Serena Williams was one; I looked like the greatest of all time. Then I thought, OK, this is kind of giving me theme park, where you have the cutout hole and you put your face through. Wouldn't it be cool if, when I moved, it moved just like that? So to do that, I needed a camera. So I put a camera on top of the mirror, and then I needed software that would actually follow my face in the mirror. So I went online and downloaded some software that was supposed to help me do that, but it wasn't working. Just like my peekaboo robot wasn't working. I was like, I think something might be off. And at the time I was experimenting, it was around Halloween, and I had a white mask for a Halloween party. So when it wasn't working, I started just experimenting with things in my office. I even drew a face on my hand, held it up to the camera, and it detected the face on my hand. So that's when I was like, okay, anything is possible. So I reached for the white mask, and before I even had it all the way over my face, it was already detecting the white mask. So here I was at MIT, this epicenter of innovation that I had dreamed of coming to since I was a little girl, and I'm in whiteface, having to be seen coding in a white mask at MIT. And so that was when it switched from, can I shapeshift like Anansi the spider, to, hold up. Side quest. What is going on here? You've said that while the white mask episode was disheartening, you didn't want people at the time to think you were making everything about race or being ungrateful for rare and hard-won opportunities.
And you felt that speaking up had consequences. What kind of consequences did you fear? Retaliation, being blacklisted, being kind of marginalized in the way my research was perceived. And so I remember even when I started exploring the research, grad students warned me. They're like, you know, XYZ, they tried to study bias, it didn't go well. It's like, these are the bones of grad students past. That shadowy place? That's what it looks like to study discrimination, right? Like, the places the light doesn't touch for a career. So I was highly discouraged from doing this sort of work because it was touching on bias and discrimination. So that was one thread of being discouraged. But another thread of being discouraged was, like, this involves AI. AI involves a lot of math. I'm like, and? Right? You know, engineering background, all of that. So the math wasn't an impediment to me, but I was also seeing others' perception of the type of work I was capable of come out. And so for me to actually pursue this research was going against all of the wisdom that was being passed down. My supervisor, he wasn't against the exploration, but he was really practical. You spent a year working on a completely different project. This is a two-year program. You want to do a new project halfway through. That might be difficult. Maybe this is a side project, right? And so I think all of the advice was well-meaning, and everything they warned me about did happen. So I guess this is where it helps to be stubborn. But also, I think the other thing for me was, even though the people around me at the time didn't quite see the vision, it still felt important to me. And I have to commend the Media Lab for creating a space where I could explore, a sandbox, even if they didn't understand the shapes I was building. They're like, you do you, right? We don't know what you're doing, but figure it out.
And there are many spaces where I wouldn't have been able to explore that at all. You stated that in some ways you went into computer science to escape the messiness of the multi-headed isms of racism, sexism, classism, and more. And though signs indicated otherwise, you wanted to believe that technology could be apolitical. Oh, 100%. So what changed your mind about speaking up? When I saw the reaction to the 2016 election, it was kind of an all-hands-on-deck moment, and this questioning of, what can I do to make a difference in the world? And so when I got to the future factory, there were two missions. One, get the PhD, family legacy, third generation. I didn't really want to go to grad school again. Like, you know, so I kind of did the, well, I'll apply to one place, and if it doesn't work out, I did try. So I applied to one grad school. It worked out. So this was the price for getting in. Did you really think you weren't going to get in? To the Media Lab? Yeah. You never know for sure. I thought I had a good chance, but it wasn't 100%. And if it didn't work out, I could continue my entrepreneurial dreams. So I was not heavily invested in getting in. In fact, I was going to be working on an entrepreneurial project until I got a call from my father. It's like, remember who you are. Oh, no. I was like, Rhodes Scholarship, Fulbright Fellow, Daddy, this has to count for something, right? You know. He's like, it is not a terminal degree. Okay, okay, all right, all right, that is true, you know. So I applied to that one place and got in. Do you think you'll ever go back to that entrepreneurial project? It's a whole other area of questions that I'd love to ask you about, but again, the time is slipping away. Yeah, I mean, I was working on a hair care technology company. I was CTO of an ed tech company. With the hair technology company, we were about a decade early on hair analysis.
But now if you go to Myavana, you can actually do a hair analysis where a camera looks at your hair strands and can tell you so many things about them, but also give you unique product recommendations. So I was CTO for that before I went to Zambia. So I had a lot of side projects. Your thesis, which you titled the Gender Shades project, demonstrates the priorities, preferences, and biases of those who create code, and the algorithmic bias from companies including IBM, Amazon, Microsoft, and more. With over 3,400 citations, you exposed how AI facial recognition systems had 100% accuracy for white male faces and near coin-flip results for dark-skinned women. Some labeled Michelle Obama as male. Others were labeled as gorillas. What were some of your other findings? Yes. So there are a whole combination of findings when it comes to the mislabeling of faces. So with the Gender Shades project, as I started looking at how computers read faces, I was asking three different kinds of questions when it comes to facial recognition technologies. So the first question is, is there a face? This is face detection. So when I'm putting on a white mask, that's a face detection fail, right? Another kind of question you might ask is, what kind of face? So what's the gender of the face? What's the age of the face? Maybe, what's the emotional expression on the face? And so that's what I focused on for my master's work. With Gender Shades, I was looking at companies guessing binary gender. And in that, we tested a number of different companies. So in the case of Microsoft, there was perfection for one group, the lighter males, the pale males, affectionately called, right? And it wasn't so great for other groups, like darker females. But it was also interesting because we tested a company from China. We found that it actually had the best performance on darker male faces.
And this was really important because in that research we were doing intersectional analysis, borrowing from Kimberlé Crenshaw's work on discrimination along multiple axes, right. I was like, oh, this is being applied in the legal space, but maybe there's something for computer scientists to learn. So instead of just looking, in this case, at gender, I also started looking at skin type as well. And so it wasn't just the story of, okay, it works better on male faces than female faces, which was the overall trend, or it works better on lighter faces than darker-skinned faces. But when we did that subgroup analysis, right, lighter males, lighter females, darker males, darker females, that's where we got the stark contrast you were just mentioning. And Microsoft had the good results, right? With IBM, the error gap between their best performing group, lighter males, and their worst performing group, darker females, the highly melanated like myself, right, was around 34%. So those were the findings that really got me to start exploring what are other ways in which computer vision has failed, right? And so you have the Gorilla Gate example that you just brought up with Google Photos, where, in what I now call an evocative audit, people in the wild, regular people, were interacting with these systems and seeing issues. Google fixed the problem not by making a better system, but just by removing any label of gorillas. So actual gorillas were also not labeled gorillas, just to be extra safe. I don't know if that's quite the solution. So we've seen an evolution of different approaches to addressing some of these misclassifications and mislabelings. But as I was doing the work, I realized even if these systems were perfectly accurate, whether we're going from guessing the gender of a face, which is difficult, right, how does a person identify, to figuring out the unique identity with facial identification.
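[Editor's note: the intersectional subgroup analysis described above can be sketched in a few lines of code. This is a hypothetical illustration, not the actual Gender Shades code: given per-image predictions tagged with gender and skin-type labels, it computes the error rate for each intersectional subgroup and the gap between the best and worst performing groups. The toy numbers below are invented to echo the pattern in the interview, not real audit data.]

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute error rates per intersectional subgroup.

    Each record is (gender, skin_type, correct), where `correct`
    is True when the classifier's guess matched the label.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for gender, skin, correct in records:
        group = (gender, skin)
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

def error_gap(rates):
    """Gap between the worst and best performing subgroups."""
    return max(rates.values()) - min(rates.values())

# Toy data: near-perfect on lighter males, much worse on darker females.
records = (
    [("male", "lighter", True)] * 100 +
    [("female", "lighter", True)] * 93 + [("female", "lighter", False)] * 7 +
    [("male", "darker", True)] * 88 + [("male", "darker", False)] * 12 +
    [("female", "darker", True)] * 66 + [("female", "darker", False)] * 34
)

rates = subgroup_error_rates(records)
gap = error_gap(rates)
```

On these 400 toy records, aggregate accuracy looks respectable (about 87%), while the subgroup view exposes a 34-point error gap between lighter males (0%) and darker females (34%), which is exactly why the disaggregated, intersectional breakdown matters.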
If you have perfect facial identification and cameras everywhere, we have the infrastructure for a surveillance state that tracks your every move: where you go to worship, who you see at night or in the morning, whatever your preferences are, right, where you go and protest. And so that was a very interesting tension for me to hold as I was doing this research, because it wasn't as easy as saying, okay, let's make more inclusive data sets, and when we have more inclusive data sets, we'll have more accurate facial recognition. Accurate systems can be abused. And so the analysis had to be not just how well does the technology work, but what kind of technologies do we want in society in the first place? Amazon dismissed your findings. How did you find the confidence to stand by your data? It was interesting because we were talking about my parents earlier, right? So we did our first research paper, Gender Shades. And when it came out, we had tested IBM, Microsoft, and Face++, this company in China. Our second paper tested those companies but also included Amazon and another company. So when the first paper came out, it was actually fairly well received, right? Like, okay, bias is an issue, we're working on it. Or some companies were like, we've known and we've been working on it. I was like, okay, okay. But this was the approach. So when the second paper came out, the reaction was surprising to me and to my research team, because imagine having a test where the answers have been known for a year and still failing. And you're a company as big as Amazon. And you're going for a billion-dollar contract with the Pentagon, which was happening at the time this research came out. And so we're showing their competitors that are also bidding, right, are performing better than they are on this computer vision task. And so I was surprised at the Amazon attack, because the pushback I had expected from the first paper didn't come. But this is what the grad students of the past, right?
You know, those, the places you shouldn't go. The bones. And the bones, right? This is what they had warned of. And oh, yeah, it came down hard. I remember I was spending my birthday in Switzerland, but I was at a conference, the World Economic Forum. And the paper was going to be released about a week later. And so I remember flying from Zurich to Honolulu, I'm in this big red jacket, and I'm like, okay, I need some Hawaiian shorts, right? And I had sent the results in advance to all of these reporters, and Amazon was claiming they had never seen the paper, even though I had, you know, the email evidence. Maybe it hit spam. I don't know. Benefit of the doubt. Yeah, maybe. Right. So they did all of these delay tactics. And then when the paper came out, they had a corporate vice president, Dr. Matt Wood, come out and try to discredit the paper. So much so that we had the research community rally around us and say, this paper is valid and important, because the types of issues it addresses, we have to all be thinking about as AI becomes more entrenched in our lives. We don't get better as a field by ignoring the problems. We have to actually confront them. And so this coming from a Turing Award winner, you know, our Nobel Prize for computer science, and a former principal AI researcher at Amazon really had a major impact. What I found interesting, though, is that when the book Unmasking AI came out, it was an Amazon best books pick, after all that time. And even as this was going on, I separate individuals from institutions; there were people inside the company sharing what they were able to, in terms of the opposition that was actually inside the company about the use of facial recognition. And I am happy to say that all of the U.S.-based companies we audited stopped selling facial recognition to law enforcement. Thank you. Thank you. Very important work.
Let's talk about the other side of facial recognition that you mentioned, and the whole way in which we're now being monitored. What's your feeling on all of this? I think we have to be very careful now. I was just looking at the recent announcement with Sam Altman and Jony Ive and thinking about what future computing devices might look like. And if you look at companies like Limitless, which makes a pendant that's always on, always recording, I'm thinking, where's the consent piece? So some years ago, Google experimented with Google Glass, right, where you could record and so forth. And people didn't like it. They were calling users glassholes and things like that. I was reading on LinkedIn an experience from Allie Miller, who's this top voice in AI. And she shared how she was backstage having a conversation with somebody and said, oh, we said so many good things, I wish we had written it down. And the person said, oh, don't worry, I have an AI recording it right now. And so that level of violation of privacy and consent is only going to increase if we don't push back. One of the ways we've been pushing back with the Algorithmic Justice League is our Freedom Flyers campaign. So the TSA has been putting facial recognition at the checkpoints in airports. They have plans to expand it to 400 airports. And you actually have the right to opt out. But most people don't know. And it's not surprising, because you go there and they say, step up to the camera. The signage that says you have the right to opt out is usually very difficult to see. So we've been doing surveys to see if people are actually finding the signs. And oftentimes people don't even see the signs. Last year, we talked to the Department of Homeland Security. We showed our data. And so they committed to, by the spring, actually adding language on the kiosk to tell people they can opt out, which is a good step. But we really have to think about the power dynamics. You're in line.
I don't know if you're early or late to the airport, but usually I have time pressure. You might not have time pressure, but I have time pressure, right? There's social pressure, people behind you in the line. There's financial pressure. I don't know how much your tickets cost, but the prices seem to keep going up, right? So you're doing all of that. You finally made it to the airport. Now they've got the scan thing going on. And a TSA officer tells you to step up to the camera. They don't tell you that you have the right to opt out. You've seen everybody else in front of you step up. The signs are there, but they're usually turned around or in a different language. We've documented all the things. They managed to put the part that says you can opt out right where the pullout piece goes. It might have just been a coincidence. You know, we've been documenting some of these things, right. So it's not designed so you're actually aware. And one thing they know how to do at airports is design signs, right? No weapons formed against TSA shall prosper: that's close to a $15,000 sign. Real ID, you need to have that. The signs they want you to see, TSA is hiring, you will be able to see those. So, you know, there's all of that going on. And so the big thing we've been doing is, one, letting people know they have the right to opt out, because we're getting the story that, well, no one refuses it. They don't know they can. But the other thing we've been doing is also collecting travelers' experiences. And some of them have been really disheartening. I remember reading one from a man whose son had autism, and his son was scanned against his will. You know, I told my parents to opt out because of the work I do, and I remember them being humiliated in line. Right. And so when I sat down with Secretary Mayorkas at the time, I shared that story.
But also the stories of many other people, who talked about intimidation tactics or just being dismissed with, well, we have cameras all over the airport anyway. I just came back from Greece, and this was the very experience I had when I was crossing over. When I said, oh, I want to opt out, smug look, they're like, we have cameras everywhere, this doesn't matter. But this actually does matter. And why it matters is that if we don't opt out and if we don't resist, then that narrative persists that people want this, which they'll use. And they claim right now that they only put the facial recognition on some of these checkpoints and that they don't have it on the overall system. But all of this is just one line of code from being mass surveillance if we don't actually resist. And so that's why it's important, even if you haven't opted out in the past, that you continue to do it in the future. And so then, going back to that situation where you're backstage and someone has their AI device, right? If we want a consent culture, we have to practice a consent culture and also demand a consent culture. Because what we will see if that doesn't happen is listening devices everywhere, which we are already set up for, right, with our cell phones, but even more ubiquitous and even more intimate. Aside from not providing consent, what else can we do to combat mass surveillance in our society? Yeah, I think this is where there's individual action and collective action that's needed as well. At the end of the day, it does come down to laws, right? And so seeing, for example, the EU AI Act, where they have a restriction on the use of live facial recognition, shows that there are alternative ways. Now, it does depend on the administration. With facial recognition, we still don't have federal laws around it, but in 2019 and 2020 we were able to get quite a bit of traction at the municipal level and also at the state level.
What's going on in Congress right now, and now in the Senate, is there's a push to say any kind of regulation of AI at the state level is going to be on pause for a decade. That's huge. That's saying the protections that people fought for, because at the federal level it's a bit harder but at the state level we can get traction, all of that gets rolled away. So that's what we should be putting our eyes on right now. Given the current administration, what are your hopes and fears about what is possible from a collective public response? I think my fears right now, especially with the gutting of people in the administration, or people at the IRS, people at different federal agencies, is this assumption that humans are so easily replaced. And this is why stories matter so much. It's not even the power of AI, but the power of the stories we tell about AI. So I think about NEDA, the National Eating Disorders Association. They bought into the hype, right? AI is going to be better than humans, it can do call center work, et cetera. And at the time, their workers were trying to unionize, you know, for the reasons you do: better pay, better conditions, all of that. They decided to fire all the workers and replace them with AI. So you had a headline, right: NEDA fires staff, replaces them with AI. I don't think it was even a week before you had the next headline, which was that the chatbot the people were replaced with had to be shut down. And that chatbot was actually giving people with eating disorders advice that's known to make eating disorders worse. And so here we're looking at the actions of FOMO, fear of missing out: we have to adopt these AI solutions. And also how easy it is to dismiss the work you're not proximate to, to think it's so easily automated. So now coming back to the government, right? We're seeing this clearing out of humans, and then we're going to see this rollout of AI systems as a replacement for the humans.
And then we're going to find out, oh, we did need humans after all, right? And we're going to have to backtrack. Surprise, surprise. Tell us about what you're currently doing, because you are on the front lines of helping society manage all of these issues and protecting society from them. Talk about what you're currently doing. I'm going to ask you one last question and then I'm going to have to close out. Oh, it's been so much fun. I know. OK, so what I'm currently doing is going global. I mean, not that the U.S. isn't a great place to be right now. Sorry, allergies. So last year, Oxford University, where I did the Rhodes Scholarship and did a master's in learning and technology and didn't do that master's in global health. So they reached out. One master's is enough, right? Not for my dad. He would have me out here working. So they reached out, and they had started a new institute focused on ethics and AI, and they had a fellows program. So they asked if I'd be an inaugural fellow. I was like, oh, what does it entail? Well, tell us what you want to do, when you want to do it, and what resources you need. Is that a blank check? Sounds like a blank check. I'll take the blank check. And so, long story short, what ended up happening is I proposed doing a five-year anniversary world tour of the documentary Coded Bias. Coded Bias premiered at Sundance in 2020. And when I watched it in 2025, it was as relevant as ever. We're talking about issues of bias and discrimination in AI, still persistent, and people see it more now, right? Because if you are applying for a job, you know, you might not get it. If you're applying for a loan, all of these areas, AI is now becoming part of the decision-making system. If you're applying for college, whether you get the right medical diagnosis, whether the transcript of what happened when you were talking to your doctor is even accurate, all of this is now being mediated by AI systems.
And not to even talk about, I mean, there's so much to talk about regarding the current situation of the administration looking at social media to decide whether to allow you into the country or not. Absolutely. And you've already had issues in the past, right, where context collapse happens. So someone might say, she's the bomb. Was that the bomb, or was that a bomb threat, right? Like, this is real. And there are two parts, right? One when it's mistaken and one when it's intentional as well. We've talked so much about your science, your innovations, the way in which you're working to protect society. We haven't talked very much about your poetry. And for those that are interested in seeing more about Dr. Joy, you can go to poetofcode.com. Before we close out the show, you have some really beautiful poetry in your book, Unmasking AI. You've done extraordinary performance art. I'm wondering if you can share one poem from your book with our audience today. Only one? We don't have time for more? I'm up if you are. I don't know about everybody else in the production team. Okay, I'll do two if they permit me, because I want to end on a happy note. But before I end on the happy note, here's the note I think we should all hear at the moment. This poem is called Unstable Desire. It's in the epilogue, where I talk about my roundtable discussion with President Biden. Unstable Desire. Prompted to competition, where be the guardrails now? Threat in sight, will might make right. Hallucinations taken as prophecy. Destabilized on a mid journey, to outpace, to open chase, to claim supremacy, to reign indefinitely. Haste and paste, control, altering deletion. Unstable desire remains undefeated. The fate of AI still uncompleted. Responding with fear, responsible AI beware. Profits do snare, people still dare. To believe our humanity is more than neural nets and transformations of collected muses. More than data and errata. More than transactional diffusions.
Are we not transcendent beings bound in transient forms? Can this power be guided with care, augmenting delight alongside economic destitution? Temporary band-aids cannot hold the wind when the task ahead is to transform the atmosphere of innovation. The android dreams entice the nightmare schemes of vice. Poet of Code, certified, human made. Thank you. We said you're going to read another one. So this is one I wrote in the chapter called Intrepid Poet. And this is when I started flexing the storytelling muscles, because for so long, I felt that if I let my artistry out, my research might not be taken as seriously. And it was such a relief to find that it was moving from performance metrics to performance arts that really moved this message around the world. So, to the Brooklyn tenants and the X-coded, resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces. We celebrate your courage. No silence, no consent. You show the path to algorithmic justice requires a league, a sisterhood, a neighborhood, a podcast, hallway gatherings, sharpies and posters, coalitions, petitions, testimonies, letters, research and potlucks, dancing and music. Everyone playing a role to orchestrate change. To the Brooklyn tenants and freedom fighters around the world, persisting and prevailing against algorithms of oppression, automating inequality through weapons of math destruction. We stand with you in gratitude. You demonstrate the people have a voice and a choice. When defiant melodies harmonize to elevate human life, dignity, and rights, the victory is ours. Dr. Joy Buolamwini, thank you so much for making so much work that matters. And thank you for joining me for this very special live episode of Design Matters at the WBUR Festival. To know more about Dr. Joy, you can read her book Unmasking AI and see more of what she's doing on her website, poetofcode.com.
This is the 20th year we've been podcasting Design Matters, and I'd like to thank you all for listening. And remember, we can talk about making a difference, we can make a difference, or we can do both. I'm Debbie Millman, and I look forward to talking with you again soon. Design Matters is produced for the TED Audio Collective by Curtis Fox Productions. The interviews are usually recorded at the Masters in Branding Program at the School of Visual Arts in New York City, the first and longest running branding program in the world. The editor-in-chief of Design Matters Media is Emily Weiland.