The following is a conversation between Tom Adams, Co-Founder and Chief Strategy Officer of 60 Decibels, and Denver Frederick, the Host of The Business of Giving.

Denver: 60 Decibels is an end-to-end impact measurement company. Using the Lean Data approach, they speak directly to customers, employees, or beneficiaries, returning high-quality data in weeks to help organizations maximize their impact and grow their business. And here to tell us more about what they do and the impact they have had is Tom Adams, the co-founder and chief strategy officer of 60 Decibels.

Welcome to The Business of Giving, Tom.

Tom: Oh, it’s a delight to be here. Thank you, Denver.

Denver: So tell us about the founding story of the organization and what you set out to do.

Tom: Well, thank you very much. So 60 Decibels began its life as a business line called Lean Data inside the impact investor Acumen, which is a not-for-profit impact investor based out of New York that invests across the world. And I joined, actually, from having worked in Ethiopia and Nigeria for the Department for International Development; I had worked in aid, and previously in the finance sector. So coming to Acumen seemed like a perfect place, the combination of finance and international development, and they had this problem: We need to do impact measurement.

And truth be told, at the time, although I told everyone in the interview I knew how to do this like the back of my hand, I think to some degree I was "faking it until you make it" and didn't know exactly what I was going to do. But I knew that it was a really interesting problem, knew that people were saying, "Look! Increasingly, we want to be able to judge the social performance of our work. It's a really important issue. It's no longer good enough just to have good intentions. We actually have to know whether or not we're performing. We owe that to the people whose lives we may or may not be affecting and are fortunate enough to serve."

And so, I joined Acumen, didn't know what I was going to do, threw a whole bunch of stuff at the wall, and eventually this one thing stuck, what became Lean Data: we would actually help companies listen to their customers and beneficiaries and turn that into high-quality social performance data. And we did that for our companies in the Acumen portfolio; one or two investors came along and said, "Hey, we've got this great data that apparently you helped the company get. Would you do it for our companies?" And before we knew it, we had a business line, and that spun out almost three years ago, on April 1st, to create the business 60 Decibels.

Denver: 60 Decibels, what’s the significance of that name, Tom?

Tom: 60 decibels is the typical volume of human conversation. It might go up, it might go down, depends who you’re speaking to, but most humans speak at about 60 decibels. And we’re all about listening to people, allowing people to tell their stories, to give their views about their lived experience. And so that seemed like a nice name for a company all about human voice.

Denver: I think that’s pretty cool myself. You know, there’s this quote, “We fund endless studies to guide us forward toward a vague truth. Still the answer remains: We simply do not know.” I think that was Jed Emerson. In your opinion, Tom, what are some of the common shortcomings of the way we traditionally measure impact?

Tom: This is a topic I could talk on for a while, so I'll try to be succinct. One of the things is that there have been so many different approaches used; there just seems to be a complexity, a plethora of different approaches everyone's coming up with, trying to reinvent the wheel. And as yet, there hasn't been a standardized way of doing this work, the equivalent of financial accounting, but for social accounting.

I think different sectors are facing different problems, but the sector I came from, which was international development, where charity had come from, had developed over time a very academic approach to things that was all about demonstrating to the donors of money that their money was working.

And so, we very much focused on evaluation, and actually on these words: monitoring and evaluation, and less on actually asking: Are we systematically listening to people about their lived experience? And are we systematically listening in similar ways so that we can compare and contrast performance across similar organizations?

And instead we had this mindset of: we have to look at our own piece of work and justify its existence to the person that's giving us money. And that rather siloed this work. And it meant that we haven't built the basics of anything that scales, which is a repeatable, standardized model that people can use over and over again in lots of different contexts so that we can get comparable data. Which is surprising, given all the work that goes into doing good, into the business of giving, into not-for-profit giving. We still haven't built that fundamental system: a scalable system of doing high-quality social performance measurement.

Denver: Yeah, that’s really interesting. You know, in so many ways, the social sector has fallen into the trap of Wall Street, and that is trying to find out what we’ve done so we can report on the next quarter. And that tends to focus on outputs, and it doesn’t tend to focus on systemic change over a longer period of time, because we’re afraid that we’ll have a lousy annual report; the board will be unhappy, and the donors won’t re-up. So we have that real sense of short-term fixes to show everybody what we’re doing.

Tom: Right. I would agree.

Denver: You know, you’ve mentioned Lean Data. What exactly is it, and what are its main features?

Tom: Great. Lean Data. Well, in some ways the phrase Lean Data came about, truth be told, because I was always surprised that when I was working as the head of this impact fund and I turned up at a company and said, "Hi, I'm the head of impact, and I'm going to do some impact measurement," their response would be like this… you could see the person go, "Oh God," and switch off. "Here's the person that's going to come and judge me."

And so, in some ways I was thinking we have to rebrand this. I mean, impact measurement was sort of sexy to donors, but definitely not to these frontline organizations. And so, I was thinking about how we can make this appealing, because the fundamental of it is: I'm here to think about how you're doing vis-à-vis your customers, the things you actually want to do. Are you delivering on your purpose?

When a marketer comes into a room and says, “Here’s all the information I have about your customer,” to a CEO, they go, “You tell me more, tell me this, tell me that, tell me the other.” And it was so strange to me that when you turn up and say, “Look, I’m here to help you understand about your social performance and whether you’re delivering a social impact,” people would turn off first. So part of it was rebranding, and Lean Data was a very sexy thing. It was around the lean startup, et cetera.

But it wasn’t just a rebranding. We also wanted to take on some principles around that back to some of the points I was making earlier, that this work had been very academic. It had been very expensive. It had been slow. It had actually optimized for methodological robustness in order that people could write academic papers, not usefulness for organizations to make decisions quite rapidly.

And I was very much taken by the idea of: what is enough precision for the decision? So the "lean" was to strip back to this idea of what is enough precision for the decision an organization has to make. And can we reduce the number of people we speak to to a minimum viable data set, like the minimum viable product from the lean startup? And one or two other features like that, all focused on a faster, more rapid, more useful set of data.

Denver: Yeah, you know, that does seem to be something that plagues the entire sector. I look at, let's say, contribution pages online, and we ask people for so much damn information that they don't want to give us. And that is why they're not successful. People want to get in, get out. They want to reduce the friction. And I see this all the time when these questionnaires come to me, and I say, "You know what? You really only need three elements here. You don't need the other nine." They'd be nice to have, but guess what? They're not going to fill it out, you know what I mean? And you're going to end up with nothing.

Tom: And I felt that, and this is a bit of a caricature, it doesn't always happen. But I remember from my time working in international development, when we did do these big evaluations, someone would come along to some farmer to see if their program had worked, stop them in the middle of the day for an hour or two to ask them a 300-question survey, go away, never tell the person why they did it, maybe come back six months later at endline to ask the same thing again; and the person would never see this data. It would be published as a report.

And this is actually not right. We have to respect people. It shouldn't be an extractive process to get your data to prove something to the donor. It should be about listening to someone, respecting their time, having an engaging conversation with them about their lived experience, being as little extractive as possible, so that it's useful to them as well in terms of driving accountability into their life.

Denver: You know, I look at the sector and I say to myself, there is a mindset that this kind of data is nice to have, and in so doing, what I’m doing… I’m really fulfilling an obligation and a duty. It’s a little painful; it’s got a little bit of root canal to it. But you’re saying, no, this has to be a must have; this needs to be mission critical and something to be excited about. How do you switch that mindset?

Tom: Great question. I mean, firstly, take it away from the bad branding that it had previously, you know, monitoring and evaluation. Monitoring is for schoolchildren and the NSA, right? No one likes to be monitored. And evaluation has its place, but not everyone needs to be evaluated either. So let's make it more appealing: this is the data; this is an easy way to do it. Then I think, desperately trying to reduce the complexity and the costs around it. We have to make it viable for organizations.

It's nice to have if you can raise a grant from somewhere to spend $300,000 on an evaluation; it's not must-have if it's beyond your reach. And then obviously we have to increase the value of it. And one of the key insights I think we've had running this is that everyone is doing this measurement separately, right? So they might do a good piece of work, a great piece of work potentially, and they find out about their impact performance; metric X has increased by Y percent, and they go, "But what do we do with that? We put it on an infographic and we tell our funders." But it's very hard to make management decisions around that, because you haven't any context around it.

Of course, the financial sector runs on context: you have a sense of what returns you might expect for different types of funds, different types of investing strategies. We don't have that for the social sector. But when we do this, what we focus on is repeatability, working with the same organizations, asking similar questions of their customers to build these benchmarks. Then you can tell someone metric X has improved by Y percent, and that puts you in context: the top quintile, the bottom quintile, the middle quintile of performance among your peers. And suddenly a light bulb goes on. People go, "Oh, now I understand how I'm performing."
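As a minimal sketch of the quintile comparison described here, with made-up peer numbers and a hypothetical function name (this is not 60 Decibels' actual method or data):

```python
# Illustrative sketch: placing one organization's result among peer
# benchmarks by quintile. Peer values are invented for this example.

def quintile_rank(value, peer_values):
    """Return which quintile (1 = bottom, 5 = top) `value` falls into
    relative to a list of peer results."""
    below = sum(1 for p in peer_values if p < value)
    percentile = below / len(peer_values)      # share of peers outperformed
    return min(int(percentile * 5) + 1, 5)     # map [0, 1] onto quintiles 1..5

# e.g. "% of customers reporting improved quality of life" across 10 peers
peers = [12, 18, 25, 31, 40, 44, 52, 60, 71, 85]
print(quintile_rank(52, peers))   # outperforms 6 of 10 peers -> 4
```

The point of the sketch is only that a raw "metric X improved by Y percent" becomes actionable once it is placed against comparable peers.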

And people are the same all over the world. Performance is something we all love, and that competitive instinct comes out, and people say, "How do I improve? How do I get up in that ranking? How can I improve against peers?" I think once we get people into this performance mindset and move it from a checkbox exercise for my donors, where I tell them they used their money well enough, to: "How do I perform against my peers, and how do I improve, and how do I do it this year, and how do I do it next year?", then we're going to enter a different mentality where this is absolutely business critical. And as far as I'm concerned, a bit of competition for social good would be a fantastic thing to have.

Denver: Yeah, we all know where we need to stand. Hey, I wonder how this podcast is performing against other podcasts, you know? 

So, when I hear people talk about impact measurement, something I've always thought about, and I know you've thought about it too: Whose impact are we talking about?

Tom: Well, I actually have a bit of an issue with the word “impact” actually.

Denver: I know you do.

Tom: And it's going to stick; I'll use it, and it's going to stay with us. But everyone talks about impact, and it's a very… it's my impact. Even the word makes you the agent of change: I am impacting others. And philanthropists say, "What is my impact?" That doesn't make you a bad person to say that, but it does make it about you and what you're doing, when actually…

Denver: I’m at the center.

Tom: Yeah. I'm at the center. It's my impact. Actually, with social value, it's someone else's life that's experiencing that social value, and they are the person with real jeopardy in that impact. Okay, you might give some money away as a philanthropist, and that organization may not do as well as you thought it would. But that's not the same jeopardy as: your life is not materially improved, when you are someone who is marginalized in society. That's completely different.

So we do need to think about this phrase "impact," and actually remember that it's not your impact. It's someone else's, and you're not the center of this; it's someone else you should be thinking about. And that should lead us to the conclusion, which we think is: then you have to listen to those people personally to find out whether their life has improved or not.

Denver: Yeah. That is a great distinction. But you know what, along the lines you said: sometimes you just have to learn to live with things like "impact." I think I feel the same way about "nonprofit." I mean, why would you describe a sector by what it's not? It's not a profit; it's a nonprofit. So you say we need to change that, but then you also say, if I try to change it too radically, nobody will know what I'm talking about. So I'll kind of go along with it for a little bit longer at least.

You've talked about measuring impact, and you've talked about donors, and I've talked about boards, and I think that's all right on. But you also make the distinction: hey, we can't just be looking upstream, upward, toward the people who gave us the money. What about downward accountability? Speak a little bit about that.

Tom: Well, this all leads on from those points. I have no problem with upward accountability. We should clearly be accountable upwards, but it's a pendulum. The system has been built and continues to be built for upward accountability, because we do respond to where the money is. And I think it's going to require a shift in mentality to say the real key is downward accountability: Have people benefited? How are we performing against the areas of impact, judged by the people who do or do not experience this change? And how do we perform in that against peers? And prove that first, before we think about how to prove things upwardly.

And one very tangible example of that: I have a bit of an issue with theories of change, which are de rigueur in our sector; everyone has a theory of change, and they're talking to their donors, the people that give them money, about their theories of change.

And of course, once you've set out this theory of change, you have determined what impact is important. On your flow chart, or however you've described it, you put at the end: these are the outcomes that are going to change, and this is why this impact is important. And it's so rare that you ever actually talk to someone and say, "Well, you tell me what things changed in your life, and you tell me which of those things were material or not material in your life, and I'm going to build my whole performance mindset around those things."

My KPIs are going to be determined by what you have said in reporting, instead of by this theory of change we agreed with our donors. And that's a classic example of a system built around upward accountability, where we determine what things are important for the people whose lives do or do not change.

Denver: That's a great point. In philanthropy, I can speak to this… we pretty much define the problem. And then we give everybody who is living that experience an opportunity to tell us what they think about the problem, but it's our problem. They're like, "Hey, how about I get to define the problem? How about we change the containers? We might have a different conversation."

And I had an interesting conversation with somebody recently who basically said that often, philanthropy will come into communities and try to fix what's wrong with them. And they say that gives us a complex. How about you come in and support what's really working in our community, and kind of flush out all that's bad by fixing what's strong, as opposed to coming in saying, "You're broken, you're broken, you're broken, I'm here to fix you"? It doesn't seem to make any sense.

You know, you take this idea of impact measurement. You bring it to another level, Tom, by focusing on impact performance or continuous impact performance. What’s the difference between impact measurement and continuous impact performance?

Tom: Yeah. There's an analogy we sometimes tell, about a sprinter running a race, when I'm thinking about the traditional methods we have, which are monitoring and evaluation. And it sounds logical.

We describe monitoring and evaluation as, you know: have a plan, set out what you're going to do, monitor whether those things were done, and then at the end, evaluate whether that was any good or not. That seems quite a logical approach. But then you start to unpack it a bit. Imagine you were running a race; you're a sprinter, right? And you said, "Right, well, I'm going to run this race, and so I'm going to check that I did the things I need to do before the race. I did my training. I turned up for the race on the right day. I wore the right running shoes and I…

Denver: I got the new sneakers. They’re so important, the new sneakers.

Tom: Yeah, "I got the new sneakers. And I didn't false start, and I ran the race." And at the end of it, I've checked all these things: great, great, great. And then we evaluate it, and we say, well, what's the time? And you get the time: 14 seconds, right? That seems okay. That seems good, certainly faster than I can run a hundred meters. And we do that in isolation.

And we go, "Well done! Impact: 14 seconds. Well done. That was fast." Of course, if you take that time and compare it to the best times in the world, it's hopeless, because we'd see that's actually a fine speed, but not a great speed, certainly not a world-beating speed.

Denver: And all the fans have left the stadium by the time you cross the finish line. Usain Bolt has finished his TV interviews. Yeah.

Tom: Right. And this is the difference. A performance mindset says: How have I performed, and how have peers performed, and could I outperform? And we don't do that very often in impact measurement. What we do is: I monitor the things I said I would do, and then I evaluate each in isolation. So we never know whether we could have performed better by doing other things. If we'd given our money to an alternative organization, would it have performed better? If we had done different things, could we have done some things better?

And it's only when we get benchmarks, comparability, and we do this over and over and over again, year in, year out, that we change to a performance mentality. And I think if we've got a performance mentality going in our work, we'll create higher levels of impact. That's ultimately our goal: trying to shift people from that space to this new mentality of thinking about impact performance.

Denver: Yeah, and I know it'd be fair to say that sometimes we measure impact and then we apply it to a whole bunch of different contexts, thinking if it worked here, it's going to work there. It's just like magic. And it's a little bit more complicated than that, I guess.

Tom: Certainly. And I often take a lot of inspiration for our work, flawed though it is, from financial accounting, which is actually a system that drives decisions. It works to drive decisions. And in finance, you'd never take another company's margin and say, "Well, that's one particular company, and this company has the same sort of business model, so I'll just assume the margin is the same"… or pick any other financial or operational metric. We'd never do that, because we want to make specific decisions about performance in that company.

Denver: All this is predicated, I guess, on this big question, and that is coming up with a standard impact measurement approach because every once in a while, it can be a little bit like the Tower of Babel out there with everybody doing things, but it’s apples to oranges, to pears. Where do we stand with that? And what is some of the work you’re doing to see that we can get a common metric that we can look at intelligently?

Tom: Yes. As I described some of these things, the theory sounds great; the practice is complex. There's no question about it. And I'm not someone that believes we'll ever end up with a single metric to rule over all social impact. I do think it's more varied than that. But at its fundamentals, it can be sector by sector, or it can be as simple as listening to large numbers of people describing how they experienced a product or service. What things changed in their lives? Letting them describe which things are most material, and then building standardized tools, social surveys, around those topics, and then repeating them over and over again with multiple organizations.

And a classic example of this: take smallholder agriculture. We hear over and over again from farmers that the things that matter to them are returns and whether they get a decent income, which you might call a living income… their resilience and whether or not they're overly stressed about the future, and whether they're worried about their future livelihood.

Increasingly, actually, things around environmental stewardship: What are they thinking about the future of that farm? And then the fairness of their trading relationships comes up often. And we will build a tool around those topics, in which we will get hundreds of organizations to participate. It's not perfect. It will improve over time, but that kick-starts the process. And we need, sector by sector, more and more of that kind of work.

And then I think the best tools will rise to the top, just as they do elsewhere, and people will start to say, "Yeah, we use the 60 Decibels tool," or "We use the 59 Decibels tool," or "…the 65 Decibels tool." And then people will have standardized tools they can all use.

Denver: And 60 Decibels, you’re a strategic partner with the Impact Management Project. Are they making any headway in this arena?

Tom: Well, I think what the Impact Management Project did… which was extraordinary… was to bring people together who were all talking different languages, and get a group of hundreds, and I think maybe even thousands, of practitioners to come together and say, "Okay, we're going to agree to a bunch of principles around what dimensions of impact should be considered," which actually hadn't been done before. Because some people were saying, well, we've just got an output metric, and some people had live reporting, and other people were going further than that.

And they talked about what impact has happened, how much impact, to whom, what contribution you have made, and what the risk to impact is. And I think that was a huge gift to the sector. So it's almost like a checklist: have I got information about those five dimensions of impact? If I have, I've got a full picture. Of course, each sector will require data underneath each of those dimensions, and that will require building. But we use those Impact Management Project dimensions as guiding principles for whether we've got the right data sources in any project we do.

Denver: Yeah, it's successive approximations. You don't get there with a magic wand. It's just step by step by step. Well, occasionally with a step backwards too, I'm sure, because everybody thinks their metric is the metric. I'm sure you've run into that. Let me ask you about ESG investing… that's environmental, social, and governance… and how we look at it. I mean, what does it take into account, and what does it fail to address when measuring impact?

Tom: ESG is the hot topic of the moment, and it has its strong advocates and its critics. I think the key thing for ESG is where it's come from. It's always been a negative screen, and I think that's important as the first thing. It said: Can we avoid bad things happening in the world? Whereas a lot of the stuff we talked about from the social sector, impact investing, is actually about maximizing good things happening. So there was that distinction, and maybe you'd put all organizations on a spectrum: whether they're trying to reduce bad things or improve good things, they're all having an impact. And to a certain degree that would be fair enough.

The only additional thing where I worry about ESG is: Who is ESG for, again? Back to some of the questions you had earlier. In its current structure, ESG, this environmental, social, and governance approach to investing, considers those things important and worth measuring only if the social and environmental issues being measured affect the financial bottom line.

So it is a profit-first, people-and-planet-second paradigm, where what one would hope is that ESG would also be thinking not just about how people and planet affect profit, but about how profit affects people and planet; as in, how businesses will be held accountable to society and the planet, even if those areas have no material impact on their financial returns. And that isn't the case just yet, and I worry that opens the door for more green- and purpose-washing.

Denver: Yeah. And I guess the bottom line of all that is, unless it changes course, we’re going to get growing inequality.

Tom: Totally. I think this is almost being ignored: the structure of investing has tended to concentrate capital, and I think it would take something to argue against that. It's actually a structural way of concentrating capital in the hands of those that own it, and it will drive greater inequality. Inequality is one of the big social issues of our time. And if we continue structuring ESG as a financial maximization strategy, which is what it is, we will grow inequality, and we will actually be contributing to one of the worst social issues that we have.

We're back to feudal levels of inequality in societies; median real incomes haven't increased since the 1980s in many economically developed countries. We have a crisis of inequality on our hands, just as we have a crisis of the environment on our hands. But I do think that we are not looking at the crisis of inequality in quite the same way.

Denver: Exactly. Tom, walk us through a typical 60 Decibels impact measurement effort. I mean, how do you work with a client? How much time do they have to dedicate to this? How long does it take for you on average? And what’s the end result of it all?

Tom: Getting to the nuts and bolts of it, I like it, Denver. We tried to make it as light as possible. Again, one of the things when I first started was this “B” word kept coming up. Oh, it’s burdensome. It’s burdensome. And I think, really, burdensome? Okay, so we’ve got to reduce that burden. I don’t want to hear the “B” word ever again.

So actually, an engagement starts with an organization describing their social mission. We would say, "Great. We actually probably have some measurement tools for that social mission, although if you want to ask some different questions, let us know." That takes an hour or so of conversation. We design a survey tool. The client then provides us with a list of contact details for their customers or beneficiaries, or their suppliers, et cetera. We make sure it's randomized and do the statistics to make sure we're going to get a representative, statistically significant sample. And then off we go.
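For flavor, the arithmetic behind a "representative, statistically significant sample" can be sketched with the standard Cochran sample-size formula plus a finite-population correction; this is a generic statistics sketch, not 60 Decibels' actual methodology, and the function name is made up.

```python
# Generic sample-size sketch: how many randomly selected customers to
# survey out of a population, for a proportion estimate at a given
# margin of error (defaults: 95% confidence, worst-case variance p = 0.5).
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(sample_size(10_000))   # ~370 respondents out of 10,000 customers
```

The practical takeaway matches the "lean" theme above: past a few hundred well-randomized respondents, calling more people buys very little extra precision.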

We have a network of thousands of researchers around the world in over 70 countries, doing predominantly phone-based surveys. We collect the data and turn it around in a matter of weeks. And then we, in turn, do the analysis and benchmarking and produce what I think is a genuinely beautiful report.

I'm also very keen on how that data looks. Again, I'm not anti academic reports by any stretch of the imagination, but academic reports aren't exactly engaging for the CEO of a business to read. There's nothing wrong with having style as well as substance: make the report beautiful, make it engaging. Data can be such a lovely thing to look at. And so we try to do some very beautiful reports, which gets people excited about that data.

Denver: Well, you've got to do it. I mean, it's our attention span. We have probably gone on too long here already. Our attention span is getting shorter and shorter, so unless you can make it in 300 words sometimes, they are moving on. It's just the way it goes. Two follow-up questions on that. You can benchmark so successfully because you've got a battery of questions, your go-to questions. Just give us one or two of those so we get an idea of the kind of question you ask.

Tom: The most basic question, which is actually trying to get people to describe what outcomes changed in their life, is to ask someone about their quality of life: Has the product or service improved their quality of life? And there's a Likert scale with five different answers, from "very much improved" to "very much got worse," with "slightly improved," "slightly worse," and "no change" in between. The key is actually the second question that comes along, which asks the person to describe why. And we find that combination very nice. If you jumped straight into the question of "How has your life changed because of product X or Y?", that's quite overwhelming.

But if you ask someone to first give a sense of it and then ask them why, you get data about the areas of impact that are important to that person. And then you can start to ask them which of those are most material. So that's an example of how we check that the things an organization set out to change are actually the things that someone who has experienced that product or service reports as material areas of impact.

And then there are lots of other questions. For example, if someone has reported that safety is really important to them because they're using a solar lantern late at night where there's no electrification, we'll ask a question about that perception of safety, perhaps on a zero-to-10 scale. Then you're collecting a lot of data about that, and you're comparing: for this company, the average person says it's improved to 6.8; for that one, 8.6. And we can start to rank these companies.

Denver: Mm-hmm. So I’ve gotten your report. It’s beautiful, by the way. I love the way you bound it. It’s got eight colors. It’s really entertaining and pretty insightful. How do I turn these findings into concrete business actions?

Tom: Well, that is a really good question, and I hope I've had reasonably smart answers for some of them. For this one, the answer is we're not sure yet, truthfully. This is an emerging practice. Whilst the worlds of marketing decisions, financial decisions, and operational decisions have benefited from data over years and years, so that people have strategies and know roughly what to do with it, the practice of impact management based on good impact performance data is still nascent.

So, hopefully the first thing you get to do is say, "Wow, I've never had data like this! Let's track it again and improve it." At times, the data we get will encourage you to make a decision about your distribution channels or your after-sales service or your product design, et cetera. But we're still working out exactly how people should make decisions with this data. As people get used to the idea of impact performance, it'll become second nature to make decisions with it, just as it is with management decisions, operational decisions, and financial decisions.

Denver: Great. Tell us a little bit about the impact of COVID-19. I know you did a lot with small business owners and others. Just give us an idea of some of the insights you got from that and the work you did related to it.

Tom: We had a fair amount of work here, from people in the gig economy to off-grid energy to smallholder agriculture, tracking how people's lives were changing through COVID. Most of it perhaps wasn't that surprising: for smaller farmers in Kenya, the price of inputs got worse. They don't have much power over the prices they can charge, and so their margins were getting squeezed, and their lives were getting worse.

Perhaps the most interesting thing, though, was the scale. Because we had time-series data running through COVID, we could compare what things might normally look like with what things were actually like, and the sheer scale of how much harder life got was alarming.

And then we'd learn about which solutions might be working and for whom. A lot was made of agritech, and again, I'm a big fan of agritech, but we saw in particular that farmers were not finding as much value in it through COVID. And there was a real divide between women and men in terms of their ability to access those sorts of things during the crisis. When people were putting out messages around support, fewer of them were reaching women than men. So we definitely saw some significant divides in some of the support provided to people during that time.

“So I think the biggest challenge is adopting a mindset that unless we listen to the people whose lives have or have not changed as a result of the things we might be doing, we’re not doing impact measurement as credibly as we ought to be. And there is no shortcut around that, unfortunately.”

Denver: So, Tom, taking some of what you’ve already said and really putting it all together and trying to crystallize it, what is the biggest challenge facing the impact measurement field, and what is the biggest opportunity that gets you up in the morning?

Tom: So I think the biggest challenge is adopting a mindset that unless we listen to the people whose lives have or have not changed as a result of the things we might be doing, we’re not doing impact measurement as credibly as we ought to be. And there is no shortcut around that, unfortunately.

Now the great opportunity is that it’s easier and easier to do that. And there are more and more organizations that can provide services to do that. And so this is a huge opportunity. This is a huge opportunity for the social sector at large to start listening in a way that it never has done before. And if we do that, then collectively, we will be much more impactful.

Denver: Yeah, it sounds to me it's really about giving up power and letting the people that you're serving be the ones with the power. We all pay pretty good lip service to that, but I don't think it's necessarily done. We like to have the perception of giving up power, but we like to stay in control.

Tom: I think power is actually at the core of all of this. And I think in the future, that power will be transferred because more people will be given a voice to exercise it.

Denver: Finally, Tom, what’s next for 60 Decibels?

Tom: Oh, next? We're growing. We took on a Series A fundraise last year, which was incredibly exciting to me, and we've got some new investors. As for the work we're looking forward to doing, obviously there are lots of operational improvements. I could talk about the techniques of impact measurement and bore your listeners to tears, Denver, but there's a lot of operational work that needs to continue and improve.

But we're also really focusing on corporate supply chains. I think there's a big opportunity with large-scale corporates. We've taken on some great new clients from that world recently, a world which even five years ago I'd have been somewhat cynical about. They're really thinking about their impacts in new ways, and the scale that those organizations work at means that improvements there will lead to improvements for a lot of people through their supply chains.

Denver: That’s exciting stuff. For listeners who want to learn more about 60 Decibels, or maybe even avail themselves of your services, tell us what your website is and what kind of information they could expect to find on it.

Tom: It is, and we've got a great newsletter on there as well called The Volume that will keep you up to date with all of our work. You can drop us a line at [email protected] or find us on Twitter at 60_decibels.

Denver: Fantastic. Thanks, Tom, for being here today. It was a real pleasure to have you on the program.

Tom: Thanks so much, Denver. Such a delight to be here.

Denver Frederick, Host of The Business of Giving serves as a Strategic Advisor and Executive Coach to NGO and Nonprofit CEOs and Board Chairs. His Book, The Business of Giving: The Non-Profit Leaders Guide to Transform Leadership, Philanthropy, and Organizational Success in a Changed World, will be released in the spring of 2022.

Listen to more The Business of Giving episodes for free here. Subscribe to our podcast channel on Spotify to get notified of new episodes. You can also follow us on Twitter, Instagram, and Facebook.
