The following is a conversation between Rumman Chowdhury and Denver Frederick, Host of The Business of Giving on AM 970 The Answer in New York City.


Denver: Artificial intelligence is becoming more prevalent across the world and in our individual lives every day. There are those who believe AI and robotics will take away our jobs and purpose and maybe even destroy humanity altogether. Others are excited and believe it will usher in a golden age the likes of which we have never seen. But my next guest maintains that neither scenario is preordained; it depends upon what we, human beings, decide to do with it. She is Rumman Chowdhury, the Global Lead for Responsible AI at Accenture. Good evening, Rumman, and welcome to The Business of Giving.

Rumman: Thank you for having me.

Denver: There is so much to talk with you about, I hope you don’t mind if we jump around a little bit, but I’d like to start with how ubiquitous AI is going to be in our lives in the years ahead.

Rumman: To be honest, Denver, it’s already quite ubiquitous. So many of the interactions we have today are mediated by AI. Most people, for example, are using a dating app. Guess what! Dating apps use algorithms and AI. Literally, who you might date and marry is being decided by algorithms. The kinds of jobs you get, how employers look at you, what kind of movies you watch, when you go out, what kind of experience you have. Many, many things in our lives are being subtly nudged, pushed, and shaped by artificial intelligence.

Denver: So listeners can better appreciate the lens through which you look at all of this, share with us a little bit about your background, which is somewhat different from that of a lot of the people working on AI in Silicon Valley. I think that different perspective is really interesting in this burgeoning field.

Rumman: My background is as a quantitative social scientist, or as I like to say, I operate at the intersection of humanity and technology. I look at technology as a tool for human beings to use. My interest in things like data, algorithms, etc. came out of wanting to understand patterns of human behavior. My PhD is in Political Science. I’ve also studied Political Theory, and my master’s is in Statistics. So, I look at the algorithms, but really the important part for me is these algorithms as a tool to understand the direction that people are going in.

Denver: That’s a very interesting blend. I am the most casual observer of AI that you’ll ever find, but I’ve got to tell you, I always get a little disturbed by the language around it, and so much of what I see is this competition between the US and China. Who’s winning? Who’s going to control the future? Who’s going to control the 21st century? It sounds a lot like the arms race we had with the Soviet Union during the Cold War. Do you find that to be the case, and how do you feel about that?

Rumman: What I worry about is us falling back into traditional paradigms of competitive language. There was an article in The Economist entitled “The New AI Arms Race,” and it was about the US and China. The field isn’t meant to be economies versus economies and people versus people. In fact, one of the greatest things about the tech industry is that I can hire researchers or data scientists in India, in Ukraine, in Bangladesh. People can get upskilled by going online to a website where they can learn things for free. Inherent in technology is globalization and decentralization, and yet we’re shoving it back into these aggressive, combative paradigms, frankly, because I think we often have no other way of thinking about these things.

Denver: I think you’re probably right. Thinking about these things, you have coined a simply wonderful term called moral outsourcing. What do you mean by moral outsourcing?

Rumman: This touches on what I was just talking about. When we talk about artificial intelligence, we don’t use the same words that we use when we talk about other technologies. So, you’ll see articles saying this was a racist AI or a sexist algorithm. It’s very interesting because you never refer to your car that way or your toaster that way. It’d be rather ridiculous. But we’ve taken this thing, artificial intelligence, which is literally, I can promise you, code. It’s programming. It’s built by a human being. We’ve humanized it; we’ve anthropomorphized it. Then when something goes wrong, we shift the blame from the human that created it, and we put that blame on the algorithm. So, we’re taking the morality, the responsibility for what happens and the outcome, and we’ve outsourced it. We’ve shifted it to the algorithm. We’re saying: It’s not my job. I didn’t know this bad thing would happen. I’m just an engineer. That is actually language we hear. We have a responsibility for the products we build. There is no other field or industry where people would say something like, “Not my fault. The car hit you,” which, if you think about self-driving cars, is the kind of language people might be using. It’s literally semantics. How we’re using the adjective to modify the noun is totally different from how we talk about any other technology.

Denver: As we said at the beginning, human beings have to control this, and yet we talk about AI as if it’s a power or force unto itself that is just out there and is going to do all these things, either good or bad, and we don’t have a say in any of it.

You work out in Silicon Valley, and that’s a place that was envisioned to find solutions to the world’s most vexing problems. Do you see companies thoughtfully thinking about AI along the lines you just mentioned and taking into account ethical considerations when they’re creating a product?

Rumman: People are starting to. Unfortunately, I feel like Silicon Valley is behind many, many other parts of the world. I go to London quite a bit. London has one of the biggest scenes in AI ethics, and it’s a complete joy to be there because of the level of sophistication in the conversations I have. It’s not just London. In most of the European countries I go to, Brussels, etc., because of GDPR, people are talking about privacy, security, ethics, things like that. In Silicon Valley, the conversation’s a little bit different. We have things like the Time Well Spent movement, which talks about addiction to the apps and products being built, but very little yet about formal solutions on ethics and AI.

Denver: As you mentioned a few moments ago, so much of our lives are being governed by algorithms right now. Our credit score, whether we get a mortgage, whether we get offered a job, as you said, whether you get the date you want or not. But these algorithms just might be a little less than perfect. There could be some problems. I know Accenture has come up with a new fairness tool to help address this. Give us some detail about these kinds of problems and what you hope this fairness tool might be able to do.

Rumman: What I love about the fairness tool is that it’s a combination of, in my opinion, all the things that I’m interested in, all the things that I have studied. The first question we ask is: What is fairness? What does it mean to be fair? It is a deeply philosophical question that goes back to Aristotle, when he wants to understand things like human flourishing and societal fairness. There’s a professor at Princeton who gave this really great talk at a conference on 21 definitions of fairness. Inspired by his talk, I emailed and asked for his class syllabus, and he sent over all these papers. Then I did a week-long event called a data study group with the Alan Turing Institute in London, where we came up with the early stages of this tool. What this tool does is look at both your data and your algorithm to determine whether or not the outcome is fair. So, back to this definition of fairness. What I realized is that a lot of our talk about bias in algorithms is actually incorrect. I think we human beings understand that things screw up sometimes. Trains break down, like mine did this morning. People make wrong decisions, etc. What we’re not okay with is if the outcome is systematically more wrong for some people than it is for others. What our tool does is create predictive parity; in other words, equalized outcomes. If anyone listening is a lawyer, this idea of disparate impact is actually built into the legal framework. Disparate impact means I built something, and it affects one group of people, let’s say African-American men, differently than it affects another group of people, let’s say white men. If it’s along the lines of a protected class like race, that is actually illegal, or unfair, or just something we consider to be wrong.
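
To make the disparate impact idea concrete, here is a minimal sketch in Python. The data, group labels, and the four-fifths threshold are illustrative assumptions; this is a toy check, not the Accenture fairness tool itself:

```python
# Minimal sketch of a disparate impact check on binary decisions
# (e.g., approve/deny). Illustrative only: the group labels, data,
# and 0.8 threshold are assumptions, not the Accenture fairness tool.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision  # decision is 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest. Under the
    'four-fifths rule' used in US employment law, a ratio below 0.8
    is a common red flag for disparate impact."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: 1 = favorable outcome, 0 = unfavorable
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(decisions, groups))         # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(decisions, groups))  # ~0.33 -> red flag
```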

Denver: When you make an algorithm more fair, is there a tradeoff? Do you also maybe make it a little bit less accurate?

Rumman: That might happen. Without boring people with massive amounts of statistical mumbo jumbo, what I’ll say is that there are many ways to measure how good your algorithmic output is. Accuracy seems like, “Oh, accurate, great,” but that is not the only way to look at your models. An example I can give: let’s say my model outcome is 95% accurate. When I break it down by different subgroups, we see that it is 98% accurate for white men and only 68% accurate for darker-skinned black women. Those are actually the numbers around most facial recognition algorithms. My good friend at the MIT Media Lab, Joy Buolamwini; that’s her research. That’s what she found. But can we actually say it’s 95% accurate then? It’s problematic. There are other ways that statisticians know to look at our models and our output that go beyond just accuracy. We look at things like goodness of fit, and there are all sorts of ways of thinking about your model beyond just this one number of how many things I got right out of the testing set I pulled together.
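
Here is a small Python sketch of how a single headline accuracy number can hide subgroup gaps; the labels, predictions, and groups below are invented for illustration, not the actual facial recognition results:

```python
# Sketch: an overall accuracy figure can mask large subgroup gaps.
# All labels, predictions, and group names here are invented.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_by_group(y_true, y_pred, groups):
    """Recompute accuracy separately for each subgroup."""
    report = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        report[g] = accuracy([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    return report

y_true = [1, 1, 0, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0, 1, 0]
groups = ["grp1"] * 5 + ["grp2"] * 5

print(accuracy(y_true, y_pred))                  # 0.8 overall
print(accuracy_by_group(y_true, y_pred, groups)) # {'grp1': 1.0, 'grp2': 0.6}
```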

Denver: Let’s talk about a tangible example. Predictive policing. That seems to be an intelligent way for a police department to deploy its officers. What can be some of the potential flaws with that?

Rumman: Denver, I’m going to give you a little sneak preview of the next thing I’m working on. It’s something I call technological determinism. It’s inherent in how we build our algorithms. We have data; I build some sort of a model using that data; and then the model makes some sort of prediction. Inherent in that is the idea that there is some sort of information in my past that can inform my future. That’s fine for movie predictions and what kind of shirt you might want to buy or whatever. But when we start to talk about things like what jobs we should have, or, again, the kinds of things you might want to read, what I worry about is that the most we’ll ever achieve will only ever be a function of who we’ve been before. And that is such a problematic thing to think about. We think about our heroes in literature or in history. These are not people who just did what they were supposed to do. Our heroes are people who defied the odds, who came from nothing. That is the American dream. That is the immigrant dream: to come from nothing and to make something of yourself. Here we are creating algorithms that limit what we see, the jobs we can get, literally where we can go, who we talk to, how good we are, based on things like how much money your father made.

From a technical standpoint, technological determinism comes from two things: something called the feedback loop and something else called measurement bias. Back to your question about predictive policing; it’s a perfect example of technological determinism because it has both a feedback loop and measurement bias. When I create a predictive policing model, what I’m doing is taking crime statistics, building some sort of a model, and saying: I’m going to send out my police officers based on my prediction of where crime will be, based on where crime has been in the past.

So, measurement bias. The measurement of crime that goes into the model does not actually measure true crime. It measures reported crime and number of arrests. There are plenty of crimes that happen that are never picked up by our crime statistics. Back to this idea of bias not being equal everywhere: the likelihood of a crime being reported or not reported is not random. More affluent neighborhoods probably actually have lower reporting of petty crimes like drug deals, etc. Then you do something like send the police officers to the places “where higher crime exists.” Guess what they do? They find and arrest more people. Then you create the feedback loop. You prove your own point. You say: that’s a dangerous place; I send more cops; they arrest more people. Oh look, I was right; it’s the same dangerous place.

Predictive policing is a perfect example of technological determinism. We are isolating and identifying places that we consider “bad,” based on something that is not actually an accurate measure of where crime is happening. Frankly, it’s informed by racist police practices, racist reporting practices, all the things that are coming up today where people are reporting massive discrimination against particular communities. Now, we are algorithmically enforcing it.
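
A minimal simulation makes the measurement bias and the feedback loop visible. In this hedged Python sketch, two neighborhoods have identical true crime, but recorded arrests track patrol presence, and patrols chase arrests; every number is hypothetical:

```python
# Sketch of the predictive-policing feedback loop. Both neighborhoods
# have the SAME true crime rate, but one starts with more patrols.
# Arrests (the thing we can measure) scale with patrol presence, not
# with true crime (measurement bias), and each round a patrol is
# shifted toward wherever arrests were higher (the feedback loop).
import random

POPULATION = 10_000
TRUE_CRIME_RATE = 0.05  # identical in both neighborhoods

def arrests(patrols):
    """Each patrol surfaces ~2% of true crimes; recorded 'crime'
    is a function of policing intensity, not underlying crime."""
    crimes = int(POPULATION * TRUE_CRIME_RATE)
    return sum(random.random() < 0.02 * patrols for _ in range(crimes))

def simulate(rounds=6):
    patrols = {"north": 4, "south": 6}  # small initial imbalance
    for r in range(rounds):
        observed = {n: arrests(p) for n, p in patrols.items()}
        hot = max(observed, key=observed.get)
        cold = min(observed, key=observed.get)
        if hot != cold and patrols[cold] > 0:
            patrols[hot] += 1   # send another patrol where arrests were high
            patrols[cold] -= 1  # ...and pull one from where they were low
        print(f"round {r}: observed={observed} -> patrols={patrols}")

random.seed(0)
simulate()
# South ends up with nearly all the patrols and the most recorded
# crime, "proving" it was the dangerous neighborhood all along.
```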

Denver: Good explanation. Let’s turn our attention to augmented humanity. That’s already here. We’ve got bionic limbs and LASIK surgery and neurotechnology and CRISPR and gene editing. Boy, we are in the early, early innings. What are the next few innings going to look like, and what are the ethical concerns that we really need to be mindful of?

Rumman: We’re at a critical juncture when it comes to things like augmented humanity. Right now, we’re mostly at the stage where we take somebody who is disabled, etc., and give them the ability to have an arm again, see again, walk again. There’s a lot to unpack there in terms of ableism. There’s this really great professor named Ashley Shew who does work on technoableism. The next step, where augmented humanity is headed, is to take perfectly physically-abled people and augment ourselves. Now I could run faster, or my parents could edit my genes in utero so I would be taller or have a certain color of eyes or hair or whatever. Interestingly, some of the conversation we had in the ’90s, when we talked about gene editing and designer babies; that’s actually where we are now. It took longer than expected, but my fear is that we’re already born into an unfair world. Really, one might say that the one thing we all have in common is that we live in these very fallible human bodies. But now what we’re giving people the ability to do is transcend even that. It would simply be a function of money: How much money, contacts, and resources do you have? It’s actually coming up a lot now when we talk about Stuyvesant and all these schools that use tests to get in, and what some people have said is that it’s a great way for lower-income students, or people from families that couldn’t afford expensive school tuition, to get a level playing field. But imagine your kid is sitting in a class with other kids who can afford brain implants, so that when they read, everything they read is memorized and fed back into their brain. The way you were born is now subpar because other people are getting bionic eyes and arms and limbs, etc., or they’re just genetically modified to be extremely handsome or beautiful, which we know actually gives you a leg up in the world.

Denver: So, how do you address this?

Rumman: It’s not something people are really talking about, and I understand why. It’s further out, and there are very pressing, more immediate needs around algorithmic bias, etc. But I do think there needs to be regulation, a moment of reckoning, and actual thinking about this technology, and we can’t wait for people to start doing it. Transhumanism is already a bit of a movement. Right now, it’s frankly mostly parlor tricks; I get an RFID chip in my hand, and I can turn on my phone by waving it. But we’re not far away from more real, viable things.

Rumman Chowdhury and Denver Frederick inside the studio

Denver: And I think at the heart of it, too, is the system of capitalism. In terms of conscious capitalism and ethical capitalism, we really have to think about whether we can all move forward together with this or whether we are going to have an even greater discrepancy between the haves and have-nots.

Rumman: Right. This is a bit of a crisis of conscience that we see not just in tech but in other industries as well. We have Larry Fink’s famous letter that went out about this notion that capitalism, that companies, need to be about more than just making money. I think a lot of people in Silicon Valley would agree. We just have massive amounts of wealth. We get it. We made money. Congratulations. Now what?

Denver: They’re not the best winners in the world.

Rumman: I think about this in San Francisco all the time. San Francisco is a Dickensian city. I live in an apartment where I pay $4,200 a month in rent to be in an 88-square-foot one-bedroom apartment, and I look out onto the street where people live in tents. Is that the world I really want to live in? Do I really want to live in a Victorian-era city where I have to pay insane amounts of money, even though on paper I’m making an excellent salary, and I look out on the street and there is somebody who lives in a tent? What kind of world have we built?

Denver: I’ll tell you about the world we built. One thing that really blows me away, Rumman, is this social credit score that you see in China now. For listeners who are not familiar with it, explain what it is and what you think about it.

Rumman: A social credit score is kind of like the credit scores we have in the US, but it pulls in things like your family history, your level of education, your shopping behavior, your social media activity, your criminal record, and it creates a number for you. Essentially, it’s like that Black Mirror episode from season 3 where individuals rate each other. It’s a little bit different; in this one, people can’t directly rate every interaction, but it’s pretty much an extension of that. It’s the credit scores we have today, but hopped up on social media, your purchasing behavior, etc.
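
As a rough illustration of how such a system might collapse many behavioral signals into a single number, here is a toy Python sketch; the features, weights, and scale are entirely invented, not the actual Chinese system:

```python
# Toy sketch of a composite "social credit"-style score: many
# behavioral signals collapsed into a single number. The features,
# weights, and 0-1000 scale are entirely invented.
WEIGHTS = {
    "payment_history": 0.30,
    "education_level": 0.15,
    "shopping_behavior": 0.20,
    "social_media_activity": 0.20,
    "criminal_record": 0.15,
}

def composite_score(signals):
    """signals maps each feature to a value normalized to [0, 1];
    returns a score on a 0-1000 scale."""
    raw = sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return round(raw * 1000)

person = {
    "payment_history": 0.9,
    "education_level": 0.6,
    "shopping_behavior": 0.7,
    "social_media_activity": 0.4,
    "criminal_record": 1.0,  # 1.0 = clean record
}
print(composite_score(person))  # 730; drop below some cutoff and
                                # privileges like travel get revoked
```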

Denver: Jaywalk, and they’re catching you.

Rumman: And they’re publicly shaming you. What they’re doing is using facial recognition to see if you jaywalked, and then they put you up on the jumbotron screen with your name and a bit about who you are, just to shame you for breaking the rules and jaywalking. I think most New Yorkers are cringing at that right now. I probably jaywalked seven times on my way here. How else would I get anywhere?

Denver: The other thing that’s really freaky about it is the consequences. You can’t travel. You pay the piper if your score drops below a certain level.

Rumman: And this is technological determinism. You commit a crime, and then it says: okay, because you’ve committed this crime, you now can’t travel on these types of transit, or to these locations, or in this class of a train, or something like that. We’re limiting where people can go. But think about it: what are we assuming here? We’re assuming that someone who does a bad thing is inherently a bad person. This is the problem with all of these. Just because one has committed a crime, does that inherently make one a bad person? We’re codifying this in our algorithms. We’re saying: oh, you did a bad thing, so you go into the bad-person bucket. Then we limit your opportunities in life, so how are we allowing a “bad person” to then improve themselves?

Denver: Unfortunately, what we’re doing is we’re defining an individual by the worst thing they’ve ever done in their lives. That’s pretty sad.

Rumman: Absolutely, and we see that in social media, where people are at one extreme or the other. One thing I hear quite a bit is that all of us have gotten better at curating how we act. We police our own behaviors on social media because we wonder who’s watching.

Denver: You do a lot of things for Accenture, but one of the things you do for your clients is get them to think about a culture of ethics in the organization, because if you’re working in a place where the leadership is saying to get the product out and get it out ASAP, you really can’t easily stop and think about ethical considerations. What do you advise your clients, what questions do you have them ask themselves, and how do you guide them to think about this issue?

Rumman: One really positive thing I’m seeing from a lot of Accenture clients is that people are very, very interested in this culture of ethics. I don’t have to sell too hard. I can build all the fairness tools in the world and help data scientists, etc. with products, but it wouldn’t matter if we have no accountability and agency. My 2018 and 2019 goals for Accenture are built on the concepts of agency and accountability. How do I empower people to do the right thing, but also, how do I create a good chain of command? How can that idea of doing the right thing be a prominent business decision? How can we be proud of things like saying no to a contract because it was unethical? That’s the direction we need to head in. People will not unilaterally act that way. They will act that way if they are rewarded.

Denver: Sticking with the corporate world for a minute, when we think about all the ways AI is being used, is it being used by human resource departments, human capital departments?

Rumman: One of the biggest ways AI is being used right now by a non-immediately-tech audience is in hiring and human resources. We have companies that judge video interviews by how enthusiastic your face looks and how good a culture fit you might be. I have a lot of questions about how that might work. I have a lot of questions about how that might go wrong. We have natural language processing being used to parse resumes, and what I think about, or worry about, with that is: are we pushing people into these very traditional buckets of, oh, you went to this good school, therefore you’ll be a good hire, when actually what we know about the future of work is that it’s no longer a function of what school you went to; frankly, one would say it’s never been a function of what school you went to, but of other characteristics. We’re using AI in human resources a lot, and I understand exactly why. Companies are inundated with resumes; some of them probably get millions a day. There has to be some way to parse through them. There’s no good, scientific, systematic way, but I wonder if the right sorts of ethical considerations are being put into place.

Denver: You work with these companies on internal governance around AI, and maybe one of the reasons you do it so intently is because the government hasn’t done all that much. Or have they? Where do we stand with regulation and guidelines being put forth by the government?

Rumman: The good news is the government has finally started to look at AI more, and I know there’s been a committee formed. I will say, though, that we are behind other governments. Macron came out with a really, really good strategy for France. Then there’s the UK House of Lords; I actually sit on the advisory board for the All-Party Parliamentary Group on AI, and I’ve sat in on some of the meetings. Really amazing conversations. They have a really wonderful report that the APPG, the All-Party Parliamentary Group, has come out with after a year of meetings, deliberations, etc. A really comprehensive report. Obviously, we know China has a comprehensive national plan. Canada is investing a lot of money. We are a bit behind. Just a bit.

Denver: Let’s switch over to the nonprofit sector because, boy, those issues around ethics and AI are just as important, probably even a little more important, in the nonprofit sector, where we’re dealing with human beings all the time in terms of our clients. It’s nowhere near as far along as the business sector. There are a couple of shining lights like Crisis Text Line and Uptake.org, but what advice would you have for a nonprofit CEO as they begin to think about embedding AI into their organization, and particularly into their programs?

Rumman: I think this is an amazing time for nonprofits to shine in this space. We were just talking earlier about this idea of conscious capitalism; the point of conscious capitalism is not the capitalism part. It’s the conscience part. This is where I think nonprofits play a role. Rather than thinking about AI as just yet another tool you buy or yet another product to integrate, think of it as a dialogue that you need to participate in. As I mentioned, companies are quite interested in understanding ethical culture, ethical behavior, etc. Frankly, what I hear the most is: I don’t really know how to do this. How am I supposed to know if some community is being marginalized or impacted? I’ve never had to think about these things. This is where activist groups, nonprofits, etc. play a very, very strong role because they have been doing this work for a very long time. There’s so much to be learned across the board. One of the projects I’m working on – I represent Accenture at the Partnership on AI, and one of our working groups is about fairness, accountability, and transparency – is called Diverse Voices. It’s built on a project that already exists, and it’s about amplifying the voices that these groups – for example, Amnesty International, the NAACP, etc. – have in influencing the dialogue on AI with corporations. It’s not about just creating some advisory board, pulling them together once a quarter, and patting yourselves on the back for no longer being racist. It’s actually about sparking a useful dialogue where both sides are learning.

Denver: That’s right, because I would imagine a lot of people in Silicon Valley come from the same background and don’t have the empathy that you would get from some of these groups, or from the disabled, or older people; they all come from their own perspectives.

Rumman: There’s way too much cultural homogeneity in Silicon Valley. I am an anomaly because I’m a social scientist. That should tell you how little diversity there is in our culture. Frankly, most of us were born a very particular way. A lot of us come from middle-class households. Most of us are, as I call it, culturally American, even if we’re not ethnically white. It’s harder and harder to find people from varied backgrounds, varied income levels, varied levels of representation, and what’s sad is that Oakland is a city full of really, really wonderful programmers, data scientists, etc. who were just never brought into Silicon Valley. Some of the numbers that have been released are abysmal; 2%, 3% of the entire workforce being black or Latino is incredibly problematic.

Denver: Oh, no question about it. Let me close with this, Rumman, speaking directly to our listeners out there: someone who’s not involved in the field, but a person for whom AI, as we discussed, is becoming a bigger part of their life every single day; what words of caution, what words of wisdom would you leave them with?

Rumman: The individual person has control over the outcomes that are coming in the next few years. We’re not meant to take a back seat and just let the world happen. It may seem like these massive, multibillion-dollar companies are controlling our lives, but we have smartphones, we watch Netflix, we do all of these things where we engage with the technology. We need to demand more. We need to demand better. We need to ask questions. When we do so, that’s when people will be better. That’s when companies will be better.

Denver: Well, Rumman Chowdhury, the Global Lead for Responsible AI at Accenture, I want to thank you so much for being here this evening. Despite all we talked about, we’ve really only scratched the surface of the issues that you address. Where can people learn more about your work?

Rumman: You can visit my website, www.RummanChowdhury.com. You can check out my Twitter, LinkedIn, etc.; it’s all linked from my website.

Denver: Thanks, Rumman. It was a real pleasure to have you on the show.

Rumman: Thank you very much, Denver.

Denver: I’ll be back with more of The Business of Giving right after this.

Rumman Chowdhury and Denver Frederick


The Business of Giving can be heard every Sunday evening between 6:00 p.m. and 7:00 p.m. Eastern on AM 970 The Answer in New York and on iHeartRadio. You can follow us @bizofgive on Twitter, @bizofgive on Instagram and at www.facebook.com/businessofgiving.
