Building AI Boston

The Ethical AI Puzzle with guest Cansu Canca on Building AI Boston

38 min
Mar 11, 2026
Summary

Cansu Canca, Director of Responsible AI Practice at Northeastern University, discusses the intersection of philosophy, ethics, and AI development. She emphasizes that ethical AI isn't about adding moral constraints after development, but about building better, more functional systems from the ground up that serve societal needs while preserving human autonomy.

Insights
  • Ethical AI development often results in better-performing systems, not just more moral ones - addressing bias improves accuracy and customer reach
  • There is no value-neutral AI system - if organizations don't make conscious ethical decisions, someone else's values are embedded by default
  • Universities have a unique opportunity to lead AI governance due to their public trust mandate and access to interdisciplinary expertise
  • The speed of AI advancement requires iterative, puzzle-solving approaches to ethics rather than static policies or committees
  • Philosophy and humanities become more critical as AI advances, providing frameworks for understanding how society should use these tools
Trends
  • Integration of ethics into AI development processes rather than post-hoc policy creation
  • Universities repositioning as leaders in AI governance and responsible development
  • Shift from viewing AI ethics as a constraint to viewing it as system optimization
  • Growing recognition that AI governance requires interdisciplinary collaboration
  • Emphasis on iterative, adaptive approaches to AI ethics rather than fixed principles
  • Increasing importance of humanities and social sciences in AI development
  • Focus on preserving human autonomy and decision-making in AI system design
  • Recognition that AI adoption requires organizational and behavioral adaptation
  • Growing awareness of bias in historical datasets affecting AI system performance
  • International collaboration on AI governance standards across organizations like the UN and WHO
Companies
Northeastern University
Canca's institution, highlighted for its Institute for Experiential AI and global approach to AI governance
ChatGPT
Referenced as example of generative AI that prompted widespread discussion about AI in education
People
Cansu Canca
Director of Responsible AI Practice at Northeastern, founder of AI Ethics Lab, works with UN and WHO
Quotes
"If you are not deciding, somebody else is deciding. There is no option where there is no value judgment being made."
Cansu Canca
"A lot of the times doing ethical AI ethics or responsible AI, AI safety really gives you better technology."
Cansu Canca
"We don't want AI systems to come in, take away, strip us from our agency, manipulate us, be unfair to us and harm us."
Cansu Canca
"Universities are really unique. We have the public trust for educating the youth, preparing them to life, but also doing research that serves society."
Cansu Canca
Full Transcript
3 Speakers
Speaker A

Welcome to Building AI Boston. Today, our esteemed guest is Cansu Canca. She's the director of Responsible AI Practice and an associate professor of philosophy at Northeastern University, and the founder and director of the AI Ethics Lab. Today we're going to be talking about all things ethical AI and why academia matters more than ever. Welcome to the show, Cansu.

0:00

Speaker B

Thank you so much, Anna. Thank you.

0:33

Speaker A

Oh my goodness. Even your background is mind blowing. I mean, I can't wait to talk about this, Kara. I don't think we've really broken down ethics with an expert like Cansu.

0:35

Speaker C

Yeah, yeah, that's right. And you know, I imagine we're not going to get through it all today either, because it is a complicated subject. Right.

0:48

Speaker A

Well, thank you for being with us. And you know, I'm honored, because you have been recognized among the 30 Influential Women Advancing AI in Boston and the 100 Brilliant Women in AI Ethics. Is it true that you even work with the UN, among many other organizations, my friend?

0:57

Speaker B

Yes, I worked with the UN, Interpol, the World Economic Forum, the World Health Organization. I mean, basically, I try to help whoever I can in terms of putting ethics into practice on the bigger platform.

1:15

Speaker A

That's a big, big platform. I'm going to say this: before your work in technology, you were on the full-time faculty at the University of Hong Kong Medical School and an ethics researcher at Harvard Law School. Kara, I'm kind of starstruck.

1:34

Speaker B

Are you? Oh, yeah.

1:48

Speaker C

And you know, it's actually pretty cool when you think about health and then law. Those are two really big areas where ethics is pretty critical to the functioning of the discipline.

1:50

Speaker B

Obviously, I think for your audience, who may wonder why she does AI ethics now after being in medical school and in law: it's exactly that intersection. I'm an ethicist, I'm a philosopher by training, so I am interested in questions that are high stakes and that ask, what is the right thing to do? This critical core question is there whenever you engage with ethics, and once you start talking about AI, it becomes yet another critical question. The same training that allows me to do ethics in public health allows me to do ethics in AI, of course with additional training around the domain.

2:01

Speaker A

This is great to unpack, because I think the big question or the big concern on everyone's mind is: okay, I'm a graduate student, and now, is my job really, truly being taken over by AI? We're not going to get into the fear mongering, but I just want to ask you: how has your career in academia informed what you believe, in your core, about the subject of ethics? Why do you stay where you are at Northeastern?

2:49

Speaker B

Yeah, I think the main thing is, and I'm going to again emphasize the philosophy part, that a lot of the questions we are asking are "should" questions: what should AI systems do? What should society do as it adopts AI systems? What should policymakers do? These "should" questions are really philosophical questions. And I don't mean this in the sense of let's sit down and think about it in the abstract. I mean that we use the theoretical and structural thinking that philosophy brings, and the two millennia of work, the body of work that philosophy brings, in order to figure out our path forward. The philosophy behind it is almost like the math behind it; it is very structured. Being within academia allows you to have access to experts who are thinking about these questions in an impartial manner, by being objective: not for a company's sake, not in order to make profit for a particular product or a particular approach, but really looking at it from a computer science perspective, from a design perspective, from a philosophy or sociology or politics perspective. That is really the benefit of being embedded in academia.

3:13

Speaker A

No, we understand that this integrated approach is part of your initiatives as the director of the AI Ethics Lab. Is that true?

4:40

Speaker B

Yeah, from the outset. AI Ethics Lab is my own initiative; I started it in 2017. And then Northeastern requested a similar function as they were building the Institute for Experiential AI. The idea of AI Ethics Lab came from my work in ethics and health, actually, because the question has always been: how can we not water down the ethical questions, really be serious about them, but also not take years and years to think about them? Because the questions have to be resolved right now, right here, when a developer is building an AI system or when a deployer is about to deploy one. You don't have a leisurely time frame. You have to sit down and make decisions. And this is something that anyone who has worked in public health, for example, would know. Yes, we would love to have infinite time to come up with the best models possible, but if there's an epidemic, if there's a pandemic, you have to act with your best intentions, your best knowledge, and the best state of the art that you can possibly use. So it's the same mentality that created AI Ethics Lab back in 2017, which is still going, and the same mentality for the Responsible AI Practice I then created at Northeastern University. And Northeastern was the perfect setting for this because of its whole mentality of having rigorous academic, theoretical thinking directly connected to industry and practitioners. That intersection is super critical for the work that I do.

4:48

Speaker C

Right. And when you think about public health, we had a great conversation recently with another guest about the analogies. It's this large, systemic way of thinking about big change, rather than just individual interventions; it's like the idea of precision medicine versus public health, the individual versus the group. And there's a concept in that field that I think our audience might be interested in hearing about: harm reduction.

6:32

Speaker A

Right.

7:03

Speaker C

And harm reduction in public health means, in some cases, some intervention is better than none. So when you think about ethical AI, first I'll even back up and ask you: what does ethical AI actually mean when we say that? And then, how do you think about applying that at a large, system-wide, global level? So, an easy question.

7:04

Speaker B

You're welcome.

7:27

Speaker C

Would you please? Yeah.

7:28

Speaker B

So let me start by accepting one of our issues, which is that we cannot settle on the perfect name for this thing that we do. "Ethical AI" is not quite right, because we are not trying to create human-like AI that acts like a moral agent. But we don't have a better shortcut, right? "Responsible AI" is also not about AI having responsibility; "trustworthy AI" is not about us trusting AI. So there's always this conceptual problem that we run into. But ultimately what we are trying to say is the following: we want to make sure that we create AI systems that are beneficial for society, that are in alignment with our general societal expectations and our legal frameworks. And what does it mean to be in alignment with ethics? Going back to moral philosophy, there are really some core ideas, and they have never changed. We want to make sure that the AI systems we create still allow humans to have autonomy, in the sense that I don't want my agency to be taken away from me, I don't want my decision-making to be taken away from me. I can delegate; that's a different thing. I can choose to have an AI system make some decisions for me. But I don't want to be manipulated all the time. I want to be able to make decisions for my own life, for my own body, for myself. So that's respect for autonomy, human autonomy.

7:30

Speaker A

And what you're really talking about is governance, which is a nice umbrella term for it.

9:12

Speaker B

Yeah, exactly. So this is the autonomy part. The other one is what you just mentioned, Kara: basically harm reduction. We want to make sure that everything we build reduces harm; as we are vetting these systems, we reduce the harms and increase the benefits. This is the same when you are doing public health analysis, and it's the same when you're doing AI analysis in this sense. And the other major thing is fairness. We want to create systems that are as fair as possible. Of course, the definition of fairness varies according to the question we are dealing with and the domain we are dealing with. But the thing is, we have the tools to think about these things. We know that there are different definitions of fairness; we have literature, we have work on which one is the most appropriate for healthcare versus finance versus insurance versus education, and then we operationalize it. So basically, we don't want AI systems to come in, take away, strip us of our agency, manipulate us, be unfair to us, and harm us. That is the thing we are trying to prevent. That's basically the idea of responsible AI, ethical AI, trustworthy AI, whichever word you want to use.

9:15
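To make those fairness definitions concrete, here is a minimal sketch in Python, with hypothetical toy data (none of it from the episode), of the two framings Canca contrasts later in the conversation: equalizing the outcome across groups versus equalizing the treatment of qualified individuals.

```python
# Illustrative sketch only: hypothetical toy decisions, not data from the episode.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # ground truth: actually qualified?
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # model decision: approved?
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected attribute

def selection_rate(pred, mask):
    # Fraction of the group that receives a positive decision.
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    # Among truly qualified members of the group, fraction approved.
    qualified = mask & (true == 1)
    return pred[qualified].mean()

in_a, in_b = group == "a", group == "b"

# "Equalize the outcome": demographic parity compares approval rates.
dp_gap = abs(selection_rate(y_pred, in_a) - selection_rate(y_pred, in_b))

# "Equalize the treatment": equal opportunity compares true positive rates,
# i.e., how qualified individuals are treated in each group.
eo_gap = abs(true_positive_rate(y_true, y_pred, in_a)
             - true_positive_rate(y_true, y_pred, in_b))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.25 on this toy data
print(f"equal opportunity gap:  {eo_gap:.2f}")  # ~0.17 on this toy data
```

Which gap a team chooses to minimize is exactly the domain-dependent value judgment Canca describes: the appropriate definition for healthcare may differ from finance, insurance, or education.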

Speaker A

Nothing too big there.

10:25

Speaker B

That's

10:27

Speaker A

a core issue. I believe this is why educated people, people like yourself, matter. I notice a trend with our guests, which is that what we're really up against is the speed at which AI is doubling. I've read articles suggesting that if you think AI did a pretty bad job at whatever you put it to in your job last year, it's going to do it perfectly in four to six months now. Is that really what you're up against when it comes to getting this conversation out there and having people understand: hey, if we're not deciding this democratically, somebody else is, and they're thinking in a silo? Is that why you feel the pressure to raise people's understanding of what AI ethics can do, and why you're doing what you're doing?

10:30

Speaker B

Absolutely. I could just say yes and stop there, you see. No, seriously, that's really a great way of putting it. My colleagues at Northeastern, my team, and I are working on AI because we find this technology fascinating. And when we say AI, by the way, we are not just talking about large language models and generative AI. I have been working on this for the last 10 years, and some of my colleagues have been working on it for 20 years and more. So we are talking about AI that has been changing and becoming different, better in some senses, not so much better in others. We are not just talking about your regular ChatGPT. And exactly as you said, Anna, the systems are evolving quite fast, which is fascinating and exciting. At the same time, we need to make sure our approaches to safeguarding these systems, designing them well, making them well, keep up. And oftentimes, and I want to make sure this is really understood, when we say ethical AI, or putting ethics into AI, this just means we are making the AI system better: it is more accurate, it is more reliable, it doesn't unjustifiably discriminate against you. What does it mean to be unjustifiably discriminated against? Well, the system was supposed to pick up your resume, for example, because you were a great fit, and it just didn't because you're a woman. That is a loss; that is a system that is malfunctioning. It is not functioning well. So I don't want the conversation to always go toward "but what does morality mean?" but much more toward: what does a good technology mean? What does a well-functioning technology mean? A lot of the time, doing ethical AI, responsible AI, AI safety really gives you better technology. That also means you don't just innovate, innovate, innovate and then try to tie a little ribbon around it and call it responsible. You have to innovate responsibly. You have to design it in such a way that the correct optimization variables are integrated, the correct data set is used, the appropriate model is employed. And in deployment, you want to make sure that the right type of AI system is used for the right type of purpose. So these are not your one-page policies that come after the fact. This is really about how to develop, how to design, and how to implement.

11:19

Speaker C

Oh, sorry, Anna. The idea that it doesn't have to be a zero-sum game, that building ethically and building well can go together. We've talked about that on other shows, and about how important it is that the business side of this equation understands that. So maybe dig into that just a little more.

14:12

Speaker B

Yeah, absolutely. There are multiple different ways of thinking about this. For example, let me go back for a second to the unfair discrimination case. Suppose, as a bank, you want to implement an AI system to determine who you should give what type of loan to. Your goal is to capture all the customers who are qualified for this loan, all of them. And a lot of the time, what happens with a not-so-well-done system is that, because of historical racial and gender biases in your existing data set, or because so many groups don't even have access to banking systems, which means they are not even in your historical data set, when you don't look at the fairness of your model or your AI system, what ends up happening is that you are missing out on customers. You are missing out on great potential customers. So this is a good example, and a lot of the time this is the type of example we see over and over again: you want an AI system, and that AI system is not really serving the function you wanted it for, which is to capture all the customers who would be a good fit for this loan. And the same goes for hiring, for ranking resumes, or for student applications in education: you are trying to capture the best fits, and if the best fit is discriminated against, then the system is not doing its job. Or, from back in the day, one of our classic examples is facial recognition technologies. If facial recognition technologies keep failing on women and people of color, that just means the system does not function well. So as, let's say, a police agency that buys this technology, it keeps failing you. Again, this is a business case. Or if you are using large language models, you have confidential information, you don't know how to use the language model, and you put in all the confidential information: well, you have a problem, because that is not what you wanted; it goes against your business interest. Now you have lost some of your confidential data. There are really only limited cases where we have to go head to head with a business and say: we see that this is in your business interest, but it is still not the right thing to do.

14:32
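A minimal toy simulation, assuming hypothetical numbers throughout, of the "missing customers" mechanism Canca describes: when one group is largely absent from historical banking data, a naive rule that leans on credit history rejects qualified applicants from that group, which is unfair and also bad business.

```python
# Illustrative sketch only: hypothetical numbers, not data from the episode.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Ability to repay is identically distributed in both groups.
qualified_a = rng.random(n) < 0.6
qualified_b = rng.random(n) < 0.6

# Historical access to banking differs: most of group A has a credit file,
# most of group B does not, independent of actual qualification.
has_history_a = rng.random(n) < 0.9
has_history_b = rng.random(n) < 0.2

# A naive rule mimicking what a model trained on biased history tends to
# learn: approve only applicants who look qualified AND have a credit file.
approved_a = qualified_a & has_history_a
approved_b = qualified_b & has_history_b

missed_a = int((qualified_a & ~approved_a).sum())
missed_b = int((qualified_b & ~approved_b).sum())
print(f"qualified group-A customers missed: {missed_a}")  # ~10% of qualified A
print(f"qualified group-B customers missed: {missed_b}")  # ~80% of qualified B
```

The point of the sketch is hers: the biased system is simply malfunctioning relative to the bank's own goal of capturing every qualified customer.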

Speaker A

Well, yeah, I think it's hitting headlines harder. People can ostrich themselves and think it's only this company or that company, until they realize the landscape around them is shifting so fast. I'm afraid that if people don't jump into this conversation, and at least find their place in it... that's why we exist, by the way: so that we can really bring people into the mix. The things that people like Cansu are talking about now, the organizations she's assisting; Interpol is a really interesting example, and it's on my mind when you talk about facial recognition. People are going to want to know how these things are being decided, how their private information is being used, how they're being assessed for a loan. It's mind-boggling. And yet we're trying to dial down the fear, because people like you are still part of the conversation. I want to ask you a really granular question, because it came up in your resume. You've implemented something called the puzzle-solving in ethics, or PiE, model. Can you talk about what that is?

17:20

Speaker B

Yeah. Before I get into that, though, I want to point to something you said a little bit earlier: that if you're not deciding, somebody else is deciding. I think that is so important to underline. There is no option where there is no value judgment being made. There is constantly a value judgment being integrated into AI systems. They are designed with a value judgment in place: they are optimized for engagement, or optimized for better accuracy, or for protecting your privacy. These are decisions, these are trade-offs. You don't have the option of saying, let's just not do this, let's just do math. Well, math is not just numbers; the numbers are attached to purposes and to data. There is no such thing as a completely objective, value-free system. So I didn't want to lose that part, because you said it so correctly: there is no option where nobody is deciding. If you are not deciding, somebody else is certainly deciding.

18:25

Speaker A

Well, yeah, and I'll let you get to the PiE question, because I literally want to know. This is not going to be our last conversation, I'm just telling you. We could do this all day, right, Kara?

19:42

Speaker C

Oh yeah, I would love to.

19:52

Speaker A

I would love to.

19:54

Speaker B

The PiE model is something that I developed back in 2018, 2019. And the idea there, this was still early; back in 2017, 2018, by the way, there was very little conversation about AI governance and AI ethics. I don't want to overlook the great work individuals had been doing around the world for such a long time, but in terms of the discourse, I remember when I first reached out to companies, I had to say: I would like to discuss AI ethics with you, and here's what it means. It was not a well-known, established concept out there. So the idea of the PiE model, puzzle-solving in ethics, was to really push the point that I'm not talking about let's have another committee, let's have another one-page policy, let's have three principles and call it a day. What we are really doing with ethics is as exciting and ever-changing as innovation is; ethical questions are like that too, because they are connected to these innovations. A lot of the time it's not that you come to me, I know the answers, and I'll tell you what to do. That is also why people will sometimes, very rarely at this point, come and say: what do you mean, responsible? Am I not a responsible researcher? Why do I need to talk to you? And it's like: well, it's not personal, it's not about you. The problem is we often don't know what the right direction is, because these are complicated questions. Which data set? Which models to choose? How do you decide on a type of fairness: are you going to try to equalize the outcome, or are you going to equalize the treatment of individuals? These are questions that don't have rule-of-thumb answers. So it is like a puzzle, and the reason I always loved philosophy and ethics is that it is like a puzzle. We are constantly solving these high-stakes questions under extreme time pressure. We have to, because, going back to what you said, somebody has to. And yes, you will sometimes be wrong, but you will be wrong with the best intentions and the best information out there, and somebody will correct you, because you're going to publish your results, hopefully, or talk about your approach, and then you are going to do it better. You're going to iterate again. Just as in innovation, I would argue, and I say this always in our work as well, in governance and ethics we iterate. You do your best, you put it out there, you watch it, you monitor it, and if things are not going well, you iterate again. That is very, very different from what most companies are still doing, which is: we have a five-person governance committee that gets together every two months and talks about what we want to do, and here's our one-pager policy that basically says "be good."

19:55

Speaker A

Nope, that's not going to work. What I love about Boston, though, is that to me, the world comes to Boston to be educated. I love to hear news stories where I see that, for example, in health care, there's some policy about patient information, let's just say. I like the fact that in certain countries, and hopefully our own, we're understanding the level of care required when we're sharing information responsibly. I just love the fact that AI is this great leveler, and I know that this philosophy exists in Boston, because that's where we built our show. Right, Kara? We're both not native to Boston, but we both enjoy the fact that it's an international kind of hub. I'm sure that in your capacity you're working with big organizations like the WHO and the UN. And you're from Turkey, is that right?

23:02

Speaker B

Istanbul, yes, Turkey.

23:50

Speaker A

I love this international community, and I really particularly love what's going on in Boston. But to bring it back...

23:52

Speaker B

I agree with you, by the way. For anyone who's doing intellectual work, I think Boston is kind of like a magical Disneyland, because you have everyone who has the best, most up-to-date information about everything within biking distance.

23:59

Speaker C

We do, right here.

24:14

Speaker A

Yeah, crazy. And I know there are Northeastern campuses everywhere. I enjoy the fact that you're global and that you have so many campuses, but Boston is just this playground. I want to say something about your president. This is a quote from an op-ed piece that I read: "If colleges and universities can see the AI revolution as an opportunity instead of a threat, we may be able to reassert our relevance at the very moment much of society is questioning our value. This is the moment for academia to step up." Do you have any comments on that?

24:16

Speaker B

I mean, I cannot agree more. This is so correct, because we've been through this and are still going through it. As soon as AI became a thing, meaning ChatGPT came out, the immediate reaction from education, all of education, not just higher education, was: oh, they are going to cheat now. And it's like, come on, think bigger. Yes, of course students are going to use it; by the way, so should you. But the question is, how can we best use these systems? This comes up over and over again: how do you design your courses? What type of courses are you putting out? How are you integrating AI into courses? But very importantly, how are you integrating the things that we are talking about: the philosophy around it, the social aspects around it? Humanities and social sciences have never been this important, because AI is getting better and better, but it requires us to understand how we want to use it. It's a tool. Unless we are going to say: go ahead, AI, run our world, we are just going to be your little pawns. Unless that's our goal, we'd better learn from historical changes, we'd better understand the philosophical thinking behind it, and we'd better understand how society is impacted by it. And I always think of the university as a small version of the world. You have all these different types of people: the faculty, the researchers, the teachers, the staff and admins, those who keep the university going, the maintenance and everything, and the students, all these different people with different goals working together. And AI is not just something you take and use. No: as you are using AI systems, you adapt your behavior. As a student, you adapt your learning structure. As a professor, you should be adapting your teaching structure, your courses, the way you do exams. Of course you should. A very mundane way this comes up, among many different ways: many universities have online courses, and in online courses you want to make sure that students don't cheat. Now, another use of AI, not just ChatGPT: there are proctoring AI systems that you ask students to install on their computers, so that when they are taking the exam, the system makes sure they are just taking the exam. They are not switching tabs, they don't have anybody else around, they don't look away. But you also understand there are so many problems with everything I just said. "They don't look away": just look at the video we shot up until now, how many times I look away while thinking. No, I'm not...

24:53

Speaker C

Right.

28:08

Speaker B

You mean you are going to check my tabs? Does that mean you have full access to my computer? Does this mean you basically have spyware on my machine? What does this mean? So many different uses of AI are relevant to higher education, and one way to approach it is: well, we need these systems because we want to catch the cheaters. The other way to think about it is: what is the new way of teaching? How do we adapt ourselves? In the end, the goal is to make sure that we prepare our students for the world, and for them to be not just working but happy in the world, functioning in the world. So how do we do this? That's really the big question. And we can start from scratch if needed. It's not supposed to be just minor changes.

28:09

Speaker A

Yeah.

28:56

Speaker C

And I have to give a shout out to the Northeastern students, because I go to a lot of events around town, a lot of AI events, and I see so many of your students out in the world. I know that's certainly part of the Northeastern way, with co-op and everything, being out there and communicating, but they're just amazing. And I see them really working very hard to make connections in person.

28:56

Speaker B

Yes.

29:22

Speaker C

Which is very cool.

29:22

Speaker B

And same for me, Kara. Whenever I talk to an industry representative, a company, a CEO, they always mention: oh, we had co-ops from Northeastern, they were amazing. And every time, a little heart pops up.

29:24

Speaker A

Exactly. I've got to point to the fact that you have a policy that supports that, that you understand that your students are natives when it comes to AI. They're bringing a part of this puzzle, and you kind of mix it up a little bit. You're interested in that intersection between what jobs are actually there and what teachers can do to help students. It's very clear that you've got this system down and that it's different. I can see that even as an outsider.

29:42

Speaker B

Absolutely. And I think it's very fitting for Northeastern to think and talk about these questions. I'm grateful that our president is so upfront about these absolutely critical, really existential questions for higher education. Because we are the type of university that has so many moving parts that are important. Kara, you mentioned we are a global university, so when we think about AI, we think not just about US regulations and thinking, but also the EU and Canada. We are a research university; we have so many different departments. We are not just focused on the technical, not just focused on the humanities; we not only have all these different departments, we are highly encouraged to collaborate between departments. And we have online courses. All of this really means we cannot escape thinking about these questions. And the thing is, we don't try to. We really go: okay, this is fantastic, this is an opportunity, let's take this opportunity and talk about how to think about AI in higher education, and in my case, how to think about AI governance in higher education. Because we are building AI systems, we are buying AI systems, we are deploying AI systems, we are using them for teaching and learning, we are using them for research. You don't get more variety than this. And you have to have really robust governance to make sure that we do everything correctly and that we are really looking out for the best interests of our community.

30:10

Speaker C

Right. And higher education has such a special role: obviously training the next generation, advancing science, research, all of those things. And I love the idea that AI may save the humanities, but we'll talk about that separately; we may be right back to the liberal arts degree. But universities exert a different pressure on the system too, which is super important, because it's that whole point of who can do that: leaning into not worrying about profit first, but worrying about societal impact first, and things of that nature. Universities are so needed in that trifecta of government, higher education, and business. It's such an important piece. And I think people who worry about higher education, or think it's too expensive, or other things, are in some ways missing the role it plays in pressuring the system to do better.

31:50

Speaker B

Right. And I would second this, with a lot of enthusiasm. Because what we need to think about is that universities are really unique. The reason we exist is that we have the public trust for educating the youth, preparing them for life, but also doing research that serves society. That's why we exist, right? So we have a very different structure. In industry, ethics is a hard sell: no matter how much I give you the ROI conversation, that it's for your business benefit and so on, among so many different things that businesses will implement, ethics doesn't necessarily come first.

32:43

Speaker A

Right.

33:30

Speaker B

But there's a major difference with universities, because in higher education, first of all, you have all the experts you need. You have everyone in house; you have these fantastic circumstances. You want to figure out how to create the best organizational process? You have your business school. You want to figure out how to deal with private data? You have your computer scientists. You have all your experts in house, which is fantastic. In addition to that, you are also in this particularly special position where you can be impartial. As I said, we don't have products; we have a service, toward our students and toward society. We don't have products where we need to make sure you buy that product, make sure you see it 500 times a day until eventually your brain says: buy that product. We don't have to care about that. That is not what we do. So we are in this extremely unique position where we can lead AI governance, responsible AI, ethical AI by example. But I would go further than that: we have to. What would it mean for us not to lead in this area? How can we not lead in an area where we exist because of public trust?

33:30

Speaker C

Right. And that could be said for Boston, too.

34:46

Speaker B

Yeah,

34:49

Speaker A

I would say that's exactly why I made a beeline here. That's exactly why our show exists. And I want to bring you back for more conversations; this has been really amazing. You have made me so proud of my liberal arts education. Made me feel like a, well, I'm going to say it, badass in the world, because I understand the human element right now. And it's because of these kinds of conversations that I feel like people can really start to grow in hope. I want to encourage anybody to follow you; we'll put links to your bio in the show notes. This is a very important time, and there's a lot coming up for you in particular. Is there any shout out you want to give? I know you just finished up some research work and you're launching strong in the month of March. Is there anything you want to shout out? Go for it.

34:51

Speaker B

Thank you so much. Nothing specific. I would just say what you said: follow our work. And, going back to higher education and in some sense also Boston, I would say reach out, because I don't think there are many other teams that have the experience we have in higher education, in research, and in implementation in industry settings. Reach out. We, as the Responsible AI Practice and AI Ethics Lab, exist because we want to make sure this actually happens. We don't want to just talk about it; we don't want to just have our papers published and so on. It has to happen. We have to implement. That's why I work with intergovernmental organizations, with these groups and universities and industry partners. Because the ultimate goal is to make sure that whatever we are talking about, whatever we are thinking, whatever we are researching, eventually goes into somebody's production. And the image could be as cool as: well, we are stopping Terminators. Or as mundane, but I think super important, as: your insurance treats you well, and when you get sick, you'll actually get coverage. Which is, I'm sorry, but super important, and anyone who has had to go through it would tell you so. So we work across domains, we work with all practitioners, and we are more than happy to partner with other universities, other researchers, industry partners. Just follow us, get in touch with us.

35:43

Speaker A

You're incredibly accessible. Kara, any final thoughts?

37:21

Speaker C

No, it's just great. Thank you so much. I am really looking forward to talking with you more and figuring out how we can make Boston the official hub for ethical AI.

37:26

Speaker A

Well said. You both inspire me to get on an airplane, but thankfully for our audience and for me, we can have these conversations. Please come back, and consider yourself in the BAIB tribe: Building AI Boston. We're celebrating Women's History Month, and you are certainly a woman making history. Thank you so much for being here.

37:37

Speaker B

Thank you so much for having me. This was so enjoyable and I look forward to chatting again.

37:57

Speaker A

You will definitely be back.

38:02

Speaker C

Thank you.

38:04

Speaker A

Bye. Cansu, thank you for joining us on Building AI Boston. Stay tuned for more enlightening episodes that put you at the forefront of the conversations shaping our future.

38:05