The following is a conversation between Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, and Denver Frederick, the Host of The Business of Giving.


Denver: Vilas Dhar is President and Trustee of the Patrick J. McGovern Foundation, where he merges technology and philanthropy to address social challenges through artificial intelligence. With a background in computer science, law, and entrepreneurship, Dhar advises international bodies such as the United Nations and the OECD on AI.

Under his direction, the McGovern Foundation funds innovative AI projects that aim to benefit society. He’s also a respected voice on the ethical use of technology for the public good, and he’s with us now.

Welcome to The Business of Giving, Vilas.


Vilas: Thank you so much, Denver. I’m delighted to be with you this morning.

Denver: Tell us about the foundation, its history, mission, and what it aims to achieve.

Vilas: So the Patrick J. McGovern Foundation is a new kind of philanthropic institution created just over the last decade and really within the last few years. We came into being through an incredibly generous gift from Patrick McGovern, who was a publisher, a businessman, an international globalist through the latter half of the 20th century.

But in his gift, he actually asked us to think about a world that was possible, one where information technology, neuroscience, and new innovations would lead to new opportunities for people all over the world. When we sat down to figure out what that might mean, we realized that we were in the very early innings of a major transformation that was going to happen because of artificial intelligence.

Now, from my perspective, this wasn’t and isn’t a conversation about a technological revolution, or a conversation about the innovation itself. That’s an important part of it. But what’s much more exciting is how these innovations are changing the ways that we think about very fundamental assumptions: about how our political system is structured, about what economic opportunity means, and about how we think about basic dignity and human rights across the world.

The foundation steps into that by saying: If AI really is going to be a driver of incredible opportunity, then we can no longer do what we’ve done around previous technological revolutions, which is sit back and say: Technology companies will innovate for us and we’ll figure it out as consumers of that tech.

Instead, we need to have a different posture. We need to think about these powerful tools as something that’s owned by and for communities where we create the futures we want… we create the tools and products, and we think about building social resilience for the future.

Denver: Yeah, so it sounds like there’s a lot of co-creation in that answer.

And also what I heard there, Vilas, is that there could be a very, very special role for the nonprofit sector in this space. Would that be correct?

Vilas: I think that’s right, and to be honest with you, I think it’s a primary and central role. Nonprofits are so often asked to be responsible for solving the problems that face humanity. Whether it’s climate change or malnutrition, or a lack of education, or a failure of a governmental system to provide a service, we look to nonprofits, to their incredible and visionary leaders, to their powerful frontline staff, to step in and do something about it.

And yet, it strikes me that particularly over the last few decades, when we’ve asked them to do that, we’ve also limited their access to some of the most powerful tools that could help them deliver on their mission. If we ask nonprofit leaders to be proximate to problems, we should also ask them to be proximate to solutions.

So the goal here in our work, and we can talk more about the how behind this, is really to ask: What’s the role of philanthropy in bridging the gap between where technology capacity has traditionally sat and the populations who need it most, whether it’s communities directly or the nonprofits that serve them? And, in bridging that gap, to raise up nonprofits as the architects of our digital future.

“I could tell you hundreds of stories like this, of amazing people who are stepping forward and saying: How do we use these tools to do something great? Our role as a foundation in this body of work is to find those leaders, those nonprofits, and support them with the capital, but also the technical resources and know-how to help them deliver on that promise.”

Denver: Yeah. That’s very inspiring to hear because so often, nonprofits are the afterthought, and this is a place, particularly when it comes to the ethics of AI… and we’ll get to that in a few minutes… where nonprofits could be the lead and really set the example for others to follow.

Well, last year, the organization funded some, I don’t know, nearly 150 organizations, many of which you term “the missing middle.” How does the foundation identify and select which initiatives to support?

Vilas: It’s a great question. Let’s take a step back, Denver, and talk about what’s happening in the world around us right now that means that these 150 organizations have a special role to play.

We talk a lot about AI, and that word has really become a buzzword over the last couple of years. You can hear it everywhere, and it’s almost impossible to escape. But if you really take a look at what’s happening, again, it’s not really a conversation about AI; it’s about how it’s changing the way people live their lives.

We’ve seen that people interact with AI in all kinds of ways– from the voice assistant on your phone, to the way you book travel, to the way that credit decisions are made, and maybe in some more pernicious ways as well, where systems are using AI to inform criminal sentencing, or to shape the ways that government resources are allocated, potentially with some biases built in.

Now in this conversation, two very vocal minorities have emerged. One is a set of folks who like to think about what’s called existential risk– the idea that we’ve all heard, maybe through Hollywood movies or through media headlines, that AI is out to get us and maybe will be the end of the human species… and other similarly bombastic statements.

Now it’s important that we consider these risks. We absolutely should. And yet at the same time, we have to keep our eyes focused also on how these tools can actually deliver some really great outcomes for us. So if at one end you have existential risk, at the other end, we have this very short-term, I think, fear and anxiety that many of us share about AI. We don’t really know what it is and how it’s going to affect our lives.

And we see these headlines about bias and algorithmic issues, and maybe much more pernicious, again: Who controls these tools, and for what reason? But as I said, in between these two extremes is something that could be much more productive, that could give us a space for a real civic conversation about how AI can be used to advance human interests, about how, even as we consider risks and ethical use, we think about taking these powerful tools and applying them to powerful human challenges.

So in this space, we really focus on identifying those nonprofits that are at the forefront of applying technology to social challenges. Some of these organizations work on issues like climate change, where they’re using data and AI to do things like build digital models of the Earth so that you can actually go in and ask questions like: Here is an example of an artisanal gold mine. Now that’s a code word, Denver, for what is actually an illegal gold mine, often in the Amazon.

Denver: I get it.

Vilas: And what we can do now in this incredibly powerful expression of this technology is we can look at a map of the Amazon; we can see something that to our human eyes looks like it might be an artisanal gold mine.

We can go in, and just using a mouse, without any knowledge of AI whatsoever, kind of draw it out on the map, and then we can query the AI system and say: Show me all of the other sites that look like this, that have appeared on satellite imagery just in the last month.

And when we do things like this, the system comes back, looks through millions of pictures of the Amazon, and says: Here are 30 sites that have appeared in the last month. Now all of a sudden, we can engage our governments, our civic authorities to say, Hey, look! We’ve got data for you that you probably wouldn’t have gotten if you’d just flown planes over the Amazon for years; you probably wouldn’t have found all these sites. Let’s go do something about it.
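To make that workflow concrete, here is a minimal sketch of how such a query-by-example search might work, assuming satellite tiles have already been converted to embeddings by a pretrained vision model. Every name, dimension, and threshold here is illustrative, not the actual system Vilas’s partners use.

```python
# Minimal sketch: query-by-example search over satellite image tiles.
# Assumes each tile has already been embedded by a pretrained vision
# model; all names, sizes, and thresholds are illustrative.
import numpy as np

def cosine_similarities(query: np.ndarray, tiles: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query embedding and every tile embedding."""
    q = query / np.linalg.norm(query)
    t = tiles / np.linalg.norm(tiles, axis=1, keepdims=True)
    return t @ q

def find_lookalike_sites(query, tiles, tile_ids, top_k=30, threshold=0.85):
    """Return IDs of tiles that most resemble the analyst's drawn example."""
    scores = cosine_similarities(query, tiles)
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(tile_ids[i], float(scores[i])) for i in ranked if scores[i] >= threshold]

# Hypothetical usage: embeddings for last month's imagery, plus one
# embedding standing in for the region the analyst drew on the map.
rng = np.random.default_rng(0)
tiles = rng.normal(size=(50_000, 128)).astype(np.float32)   # one row per tile
tile_ids = np.arange(len(tiles))
query = tiles[42]                                           # the drawn example

matches = find_lookalike_sites(query, tiles, tile_ids)
print(f"{len(matches)} candidate sites to flag for authorities")
```

In production, the embeddings would presumably come from a model trained on labeled imagery, and the nearest-neighbor search would use an index rather than this brute-force scan.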

Denver: Yeah. Yeah.

Vilas: So that’s a specific example. I could tell you hundreds of stories like this, of amazing people who are stepping forward and saying: How do we use these tools to do something great? Our role as a foundation in this body of work is to find those leaders, those nonprofits, and support them with the capital, but also the technical resources and know-how to help them deliver on that promise.

“As long as we feel like civil society is in the position of being a consumer of what somebody else creates, we’re never going to be in the driver’s seat.”

Denver: Yeah. No, that’s interesting to hear because I think in the sector, we think so often of using data to measure impact. And what you just described is using data to drive impact and have it upfront, and to get to those public officials with the information to be able to say: something’s got to be done.

And that’s just really turning it very much on its traditional head as to the way that we think of data. I know you do an awful lot to be sure that communities are empowered through technology, because I think we always worry about these projects: they’re going to deal with critical issues, but are they really going to let the communities most affected by those challenges be central to the work? Tell us how you go about that.

Vilas: I think too often, we have lived in a paradigm where technology is something that’s used against us. Sometimes it’s something that is done to us, and maybe in a few glimmers of hope, it’s done for us. But very rarely, Denver, is technology something that’s done by us.

And I think that’s quite a powerful framing of the kind of transformation we want to see in the world, because it doesn’t matter if we’re talking about a nonprofit that’s trying to figure out donor management and going to one of the big commercial vendors, or if it’s exactly what we just talked about, which is building a novel solution that addresses a problem. As long as we feel like civil society is in the position of being a consumer of what somebody else creates, we’re never going to be in the driver’s seat.

So we think about this problem through three lenses. The first is an obvious one. How do we make sure that organizations have enough capital to be able to go out and do what they need to do to become really good and literate at AI and data issues? Then can we make grants, and can we bring together a convening of funders who really think about this as a structural part of their work and make resources available to nonprofits across the planet?

But something interesting came to us as we started this work at the Patrick McGovern Foundation. We talked to nonprofits, and they told us, “Hey, we’re interested. We have great ideas. We need resources.” We provided a grant.

And then about three or six months later, they’d come back to us and say: You know, we are so appreciative. But at the same time, when we went out and tried to find somebody who could help us figure this out– a consultant, a data scientist, an AI engineer– we found that that talent pool just wasn’t readily available. Anybody who had these skills was being hired into massively paid jobs inside private technology companies.

So the question became: Well, what’s the role then of philanthropy in bridging that talent gap? And one of the things we realized was that we had to invest in building institutions of technological capacity that sat inside of civil society.

We built programs here inside the foundation where I went out and hired data scientists and AI engineers and folks who were really at the forefront of this work and said, “Look, we will bring you in. We will pay your salaries, of course, but we’ll also give you a deep sense of mission, and then we’ll make you available to our nonprofit partners as experts to come in and help guide them through their journeys.”

And that was the turning point: moving from what I think of as a 20th-century understanding of philanthropy, where, you know, we’re check writers, to a much more active, engaged participant. We now have relationships with hundreds of nonprofits. We do everything from providing webinars and online materials, all the way through accelerators, through to deep, engaged, multi-year partnerships with our partners to build long-term stories of outcomes.

This has been the transformation of thinking about how nonprofits own their technology futures, and it shouldn’t be that philanthropies are the only source for this. Really, I hope that technology companies join us in this effort, and maybe most critically and importantly, this first group of nonprofits that have become leaders… open up their doors as well and share their knowledge and learnings with our entire community of affiliates so that we can really build that kind of networked approach to shared growth.

Denver: Yeah. No, that’s a very commendable approach. I mean, the complaint I hear from a lot of nonprofits is that technology is built for the corporate sector, and then it’s squeezed into the nonprofit sector with: Make it work for you; I don’t care what you do. Cut it up, chew it. But the idea of really making nonprofits central and getting AI fluency and literacy at that level…

Let me ask you about your organization in terms of how you’ve integrated AI into your operational and strategic initiatives and how it changes an organization, how it can actually change the corporate culture of a nonprofit organization where AI plays a central role.

Vilas: It’s an amazing question because I think it gets to the core of where AI can be most powerful.

When I speak with leaders of organizations, foundations, nonprofits, and certainly in the private sector, everybody wants to know that first bit, right? How do we use this? And I think it’s so critical that we build a culture of use, that whether you are a CEO, a frontline program officer, or somebody on the operations and administration team, you know that these tools aren’t some foreign, crazy, super complex thing, but something that you can download to your phone and use all day long. And that’s really the start of it: a culture of curiosity about AI.

If I look at my phone, I probably have five or six AI-driven apps. And I’ll tell you, I’m a technologist, but I’m not a kind of naive optimist, right? Like I look at these tools as things that I evaluate to see how they can help us. But I often find myself going to them to do things like saying, Here’s a speech I’m writing. Don’t tell me what to say. I’m not interested in what an AI has to say about my core ideas.

Denver: Yes.

Vilas: But instead to say, This is the speech that I’m writing. Ask me questions about my speech that might help me make it better. Tell me what might not be clear to somebody who’s reading it. In a much more functional way, I think you’ve seen there are a lot of tools coming out that help everybody write better emails, manage and schedule calendars, and so on.
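As one concrete rendering of the prompting pattern Vilas describes, here is a minimal sketch, using the OpenAI Python client as one plausible backend; the model name and prompt wording are illustrative, and any chat-style model would serve.

```python
# Minimal sketch of the "ask me questions, don't rewrite" pattern.
# The OpenAI client is one plausible backend; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def critique_speech(draft: str) -> str:
    """Ask the model for questions and unclear spots, never a rewrite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an editor. Do not rewrite the draft or supply new "
                    "ideas. Only ask the author questions that would make the "
                    "draft better, and flag passages a reader might find unclear."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Example usage: print(critique_speech(open("speech_draft.txt").read()))
```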

But to me that’s very first order, right? That’s kind of the early advances of AI that are changing our lives, and that curiosity helps us become familiar and comfortable with them. But really, it’s the second order stuff that’s really fascinating. It’s when we are able to say things like: We know that nonprofits have been incredible stewards of data about communities for so long.

Take, for example, an organization we work with that’s provided frontline healthcare to 500,000 people in rural Rajasthan. For 20-odd years, they’ve been able to collect and aggregate interesting data about medical outcomes. Well, now with AI, they’re able to integrate it into their systems and say, Let’s take all that data and understand what kind of patterns we can pull from it.

And from those patterns, how do we deliver better care? How do we understand what villages or geographic clusters maybe have certain conditions that keep popping up? And can we then give that to our frontline practitioners to go out and figure out what’s happening and see if we can deliver an answer?
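As a sketch of what that pattern-finding step might look like in practice, here is a toy example that clusters per-village health records and prints each cluster’s condition profile; the columns, rates, and cluster count are invented for illustration.

```python
# Toy sketch: surface geographic clusters where conditions keep recurring.
# Columns, rates, and cluster count are invented for illustration.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical aggregate: one row per village, condition rates per 1,000 people.
records = pd.DataFrame({
    "village":          ["A", "B", "C", "D", "E", "F"],
    "latitude":         [26.9, 26.8, 27.1, 25.2, 25.3, 25.1],
    "longitude":        [75.8, 75.9, 75.7, 73.0, 73.1, 72.9],
    "anemia_rate":      [210, 195, 220, 80, 75, 90],
    "respiratory_rate": [40, 45, 38, 130, 140, 125],
})

# Standardize so geography and health rates contribute on comparable scales.
features = StandardScaler().fit_transform(
    records[["latitude", "longitude", "anemia_rate", "respiratory_rate"]]
)
records["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Each cluster's condition profile tells frontline teams where to look first.
print(records.groupby("cluster")[["anemia_rate", "respiratory_rate"]].mean())
```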

For organizations at scale, we don’t really need that focus, and CEOs and CTOs don’t have to say: We have to build our own internal-use AI immediately. That’s not really the answer. The answer should be more broad-based literacy about what AI means inside of the organization, and deep dives into what it could mean for the strategic ability to deliver outcomes across their programs.

And then with support and help from institutions across the sector, figuring out what those high value use cases are and beginning to build against them.

Denver: What’s the roadmap for that? And I ask that in the sense of: Where do you get started? Now, I mean, you explained it pretty well there, but you know the paralysis that comes with something like this to an organization that is absolutely bogged down in the day-to-day and also feeling a little fearful of being left behind.

They need a first step. They need a small first step to get them started. What would that first step be?

Vilas: That’s right. And Denver, I believe so deeply in people in the nonprofit sector. I believe in their curiosity and their commitment and their conviction. And AI is something that is foreign and a little bit alien, but I think as people become comfortable with its incredible potential, I’m very confident that people in the nonprofit sector will step into it and own it.

In order to make that happen, you asked a great question: What’s the first step? Well, I think a few things. One is I think organizations need internal champions. And it can be really anybody in the organization, but somebody who’s willing to put their hand up and say: I’ve been looking at and playing with some of these tools, and I want to bring it to the organization in a way that helps people use it, become comfortable with it, and familiar with it.

Now, that can be anyone. It can be a CEO or a board member, a CTO. It can also be somebody on the program staff. It can really be anybody in the administrative staff, somebody who’s an internal champion. That’s step one.

Step two is for organizational leaders to recognize that growing familiarity and comfort, and ask the question that you asked, which is: How could this be used inside of our organization to better deliver on our promise? And to actually make sure this is a participatory process where people across the organization can come in and really pull those kinds of threads.

We have to acknowledge we’re in the very early stages. There’s no playbook. There’s nowhere you can go to download a document and say, We’re just going to check off the boxes here, and we’re going to be an AI-mature organization. That’s not how it works. What it takes is a shared sense of learning, a commitment to purpose.

And then the third is recognizing that there are resources out there, whether it’s our foundation or others who are willing to come in and support. We certainly won’t direct or guide, right? Because that’s not our role.

Denver: Mm-Hmm.

Vilas: But when an organization comes to us with a set of questions, to the extent that we can help, we provide technical resources, we provide roadmaps, we provide access to tools– all the different pieces that help an organization accelerate very quickly into that kind of maturity.

Denver: Yeah. Yeah. That’s great advice, Vilas. And I also think with organizations, it can give them a tremendous sense of inspiration: that we can really effect change at a level and at a scale that we only dreamed about.

And what that can do for morale and energy in an organization, you can’t even put into words.

Vilas: Exactly right. And I really do believe, Denver, that I am seeing across many of our partners this incredible and growing excitement and momentum about these tools, that even as… and I sort of alluded to this in the beginning… as a society, we’re still tentative, maybe a little bit scared.

Those organizations that are recognizing and using these tools are seeing how powerful they can be, and I think that can actually be a really important contribution to our bigger societal conversation.

Denver: Yeah, I think you’re absolutely right.

Let’s turn our attention back to the ethics of AI again. You touched on it some, but it is such a big issue. I mean, I think people are concerned about AI bias. They’re concerned about inclusion and equity, and about all the potential it has on one hand; it could also widen the divide on the other.

What are the things that you do to mitigate those risks? What’s the advice, the counsel, the initiatives that the McGovern Foundation has taken to really address this issue?

Vilas: Yeah. Let’s take the question almost like an inverted pyramid, and we’ll start down at the bottom. We’ll work our way up. The smallest part of this conversation, but the one that gets the most attention, is the technology itself. And too often, we want to talk about ethical AI.

And let’s be honest, it’s often not technologists who are talking about it, but folks in civil society, and they’re really focused on this question of: How do we make sure that technology is ethical? Well, I have to tell you, I’m a technologist and somebody who’s spent a lot of time building social institutions. I just don’t believe that there is such a thing as ethical AI.

I think ethics is a domain of humans. It’s a domain of our actions, our conscience, our morals, of how we behave. And in many ways, what that means is: it’s a function of how we design technology, how we implement it, and how we observe how it changes our systems. So let’s stop talking about ethical AI, and let’s think instead about ethical societies that are powered by AI.

Now, when we take that frame, it means that at the technology layer, there are a number of things that we need to do differently. We need to make sure that we have diverse, really talented, thoughtful people who are building these tools, who are bringing in a basic understanding of where bias can creep in, who are thinking about how we build safeguards into the technology, and so much more.

But as I think about that pyramid… and we move up one level, the ethical use of these technologies only starts with the technology. Really, it’s about how we think about governing these tools, about when we’re okay with the idea that a technology might use the data of a vulnerable population.

What are the parameters under which that’s okay and the many, many more situations in which it’s not? How are we thinking about how that community participates in the economic value of deploying that technology? How do we make sure their privacy is guarded? How do we make sure that we’re not putting them at greater risk?

Now, these are questions that have nothing to do with lines of code. They have to do with the compacts, the contracts, the social rules that we hold people accountable to when they start deploying these tools.

And we spend a lot of time with our nonprofit partners, not merely talking about it, but hearing from them what they’re scared about, what they’re worried about, and turning that into principles and values that we can then hopefully extend across the sector to say: What does it look like to ethically deploy these?

Denver: Yeah.

Vilas: Now I’ll give you one last layer of the pyramid. All the way at the top, and this is the crazy stuff, right? And you alluded to it. For so many years now, from maybe even the start of the Industrial Revolution, every time we’ve had a major technology come into play, it has certainly, as many people would say, lifted all boats.

We’ve had incredible opportunities to reduce malnutrition and poverty, to continue to kind of create economic opportunity. But I think we also have to acknowledge that it’s led to a deepening and widening rift in inequality, that those at the very top get the vast majority of the gains that come from these tools.

And with AI, I’m deeply concerned about that from an ethical, moral, and societal perspective. Is it that these few companies and the people who own them are going to take the lion’s share of benefit that comes from AI and leave the rest of us behind? So when we talk about the ethics of AI, I think we have to think about this, too.

What are the ethics of the way we’re thinking about distributing economic gains? How do we make sure that communities participate? How do we make sure that they are architects and owners rather than just users? And as we think about that, it takes us into a whole different conversation where AI becomes the backdrop, and we ask questions about justice and equity, about human rights, about what it means to live a good life.

Denver: Well stated. Let me take your answer in two parts. The first part, I’ve always been perplexed by. If I should get in my car and tragically hit someone, it’s my fault.

And you want to say, No, you’re the driver of AI. But there’s something about AI that we treat unlike any other technology as if it has a mind of its own. And it’s like, No, it’s what you stuck in there is what’s determining it. And I don’t know why it seems to be the exception to the rule, but it is the exception to the rule, and it makes absolutely no sense whatsoever.

And for the latter part of your answer, I know, Vilas, that you shared how you grew up near advanced technology in Illinois, but also in a rural setting with limited tech access. And I know how that has shaped your view, which was part of your last answer there. Speak a little bit about how it was shaped with those two different worlds that you inhabited.

Vilas: I’d love to, and let me respond actually to your first point also because I think it’s such a good one. Let me just tell you a very quick story that will feel very familiar. You know, in the early days of AI, one of the big use cases that we heard a lot about was using AI to screen resumes for companies that were hiring.

Denver: Right.

Vilas: Well, we are all familiar with one part of that story, which is that when these systems were deployed, one of the very first things that became evident was that these AI systems were often prioritizing some really uncomfortable traits.

They were picking male-sounding names over female-sounding names. They were prioritizing certain racial and ethnic characteristics. And I think we all, together and very reasonably, had a reaction that said, “Wait, this is not good. These AI systems are really biased and very scary.”

Now, in many ways, the public conversation kind of stopped there, and we all moved on with the discomfort with AI. But I’m always curious about the second order outcome. I thought about that a little bit more, and we started asking the question: Well, why are these AI systems biased?

Well, the systems were biased because they were trained on decades of data about how human recruiters were prioritizing resumes. So when we said these AI systems were biased, and we were outraged about that, I asked: Where is our outrage about the fact that we live in a system where, for decades, we’ve been using these same biases in human systems?

In the same way that we want to respond to AI bias, why aren’t we responding to systemic human bias in how we hire and recruit in our organizations? Is it possible, and this is a reframe, that AI’s bias wasn’t a bad outcome, that it actually gave us a kind of magnifying lens through which we could look at our own systems and say: Where are we going wrong?

Denver: Yeah.

Vilas: What are the other places that we could use an AI system to do this? Whether in criminal justice or in eviction rates, or in racial disparity in government services, think about all of the places we’re training an AI system on what humans do, and then forcing ourselves to engage with the stark contrast between what that AI system finally tells us is happening, and what we hope was happening in those systems.
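As a toy demonstration of that “mirror” effect, consider training a simple model on synthetic, deliberately biased hiring decisions and then auditing its predictions; the gap it reveals is exactly the inherited human bias Vilas describes. Everything below is simulated for illustration.

```python
# Toy demonstration: a model trained on biased historical hiring
# decisions reproduces the bias, and a simple audit makes it visible.
# All data here is simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(size=n)               # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)       # 0/1 demographic group label

# Simulated history: skill mattered, but group 1 was systematically penalized.
hired = (skill + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Audit: predicted hire probability at identical (average) skill, by group.
probe = np.zeros(100)
for g in (0, 1):
    rate = model.predict_proba(np.column_stack([probe, np.full(100, g)]))[:, 1].mean()
    print(f"group {g}: predicted hire probability at equal skill = {rate:.2f}")
# The printed gap is the decades-old human bias, now quantified by the model.
```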

Denver: Yeah. So I think the bumper sticker would be: AI is a Mirror.

Vilas: Absolutely.

Denver: I mean, essentially, we’re just looking at ourselves. And what we’ve done, we’ve taken the analog world, and we’ve just programmed it digitally.

Vilas: Exactly right.

Denver: And there’s all the biases that we had, which we somehow had forgotten about…

Vilas: That’s right.

Denver: …that smacks us in the face.

Vilas: And so, look, I’m excited for this. I know it’s deeply uncomfortable, but if we have a tool that lets us really root out injustice, what an amazing toolkit for those of us who care about a better future! So that’s kind of the first part.

On your second part, I had a chance recently to meet an amazing leader in the climate debate who has a podcast, and the name of her podcast has just stuck with me. It’s called Outrage + Optimism. That’s Christiana Figueres, who led the UN climate negotiations that produced the Paris Accord, among a number of other roles.

So, to me, that’s kind of also the frame in which I think about technology, is this incredible positivity about what technology could do if we applied it to all of these problems in the ways we’ve talked about.

But the second part of it is very deep impatience. If we know that it’s possible to do that, to take the billions of people on this planet who still survive at near-poverty levels and create economic opportunities for them, and we know it’s possible, then Denver, why wouldn’t we be applying every fiber of our moral being, every resource we have, to making sure that world comes as quickly as possible?

Denver: It makes the status quo more intolerable.

Vilas: That’s right.

“The greatest potential of AI is to let us own our own agency and realize that a better world is within our reach and our control.”

Denver: We have the tools now that we can hasten that.

Well, that is a great lead into my final question, and that would be, Vilas: if you could envision the most impactful way AI could transform our society for the better in the next decade, particularly in these areas where we’re currently falling short on solutions, what would that vision look like?

Vilas: The greatest potential of AI is to let us own our own agency and realize that a better world is within our reach and our control. Let me tell you more about that. I mean, I think for 25 years, and I’ve alluded to this a few times, all of us have implicitly given over control of a whole set of decisions about what our world looks like to a very small set of technologists, private business interests, and governments.

And I think now we look at AI, and it feels so foreign and far away from us that sometimes it’s very hard to engage with it, to ask the question: What does this mean for me? And how do I change the course of what’s happening with it?

Look, I could give you a hundred answers about how AI will fix our climate challenge and fix our educational system and fix our medical system, and they might each hold a grain of truth, that there’s a new possibility there. But if we don’t step forward as a society, as communities, as individuals, to take ownership of the decisions about how those technologies will be applied, we’ll never get there.

So that, to me, is the real possibility of the moment:  Does this new tool give us enough opportunity, enough reach, enough capacity to let every single person on this planet realize that we could be part of building our shared future?

Denver: Complacency is not acceptable. You’ve got to step up.

For listeners interested in learning more about the Patrick J. McGovern Foundation, tell us a little bit about your website and some of the information they will be able to find on it.

Vilas: Wonderful. Well, please follow us on social media, on LinkedIn in particular, where we share evolving news about the AI field, about the work our incredible partners are doing, and about some of our initiatives. You’ll also find our website at mcgovern.org, and please reach out in all the ways we can be helpful as you think about this AI-based future.

Denver: Well, you’re doing a tremendous job in providing a tremendous service, not only to the nonprofit sector, but to society at large, and we thank you for that.

And I thank you for being here today. It was really delightful to speak with you.

Vilas: Denver, thank you so much.


Denver Frederick, Host of The Business of Giving, serves as a Trusted Advisor and Executive Coach to Nonprofit Leaders. His book, The Business of Giving: New Best Practices for Nonprofit and Philanthropic Leaders in an Uncertain World, is available now on Amazon and Barnes & Noble.
