The following is a conversation between Cass Sunstein, Co-Author of Noise: A Flaw in Human Judgment, and Denver Frederick, the Host of The Business of Giving.

Cass Sunstein, Professor at Harvard Law School and Co-author of Noise: A Flaw in Human Judgment

Denver: If we could only remove bias from our decision-making, both conscious and unconscious bias, the decisions we make would be so much better. And that’s true… up to a point. There is something else, however, we need to consider, and that is noise. And wherever there’s a judgment being made, there is noise, and more of it than we think. That is the topic of an absolutely fascinating new book called Noise: A Flaw in Human Judgment.

And it’s a pleasure to have with us its co-author, Harvard Law School professor, who’s currently on leave to work in the Biden administration, Cass Sunstein.

Welcome to The Business of Giving, Cass! 

Cass: Thank you so much. An honor to be here. 

Denver: So now that we have people wondering, tell us — What is Noise?

Cass: So in the last 30 or 40 years, we’ve attended a lot to bias, and you can think of bias as a predictable error. So if you get on the scale in the morning, and it always shows you as five pounds overweight — that’s my scale, by the way — that’s a bias. And we kind of know what to do about bias. Sometimes it could be racial bias. It could be optimism. It could be inertia. It could be sex-based bias. There are biases out there. 

A noisy scale is one where you get on it on a Tuesday and it shows you as five pounds overweight, and on Wednesday, it shows you as five pounds underweight compared to your real weight. It’s variance. And so, a noisy judge could be an individual who’s just all over the place, depending on what day of the week it is, or whether it’s the morning or afternoon. A noisy organization is one whose different members with decision-making authority reach very different decisions from one another. 

And there’s been a lot of attention on bias. It has charisma, bias does. Noise doesn’t have charisma, but it’s often the villain of the piece. 

Denver: So as I think about the pattern you just described, when I think of bias, I see the errors all in one quadrant, but I guess with noise, they’re all over the place. 

Cass: Right. So suppose there’s a hospital, which is just overdiagnosing all the time and having too many tests and too many procedures. There are such hospitals, and the problem of overdiagnosis in medicine is real. That would be one. Let’s call that the “bias” hospital. 

The noisy hospital would be one where an individual doctor, let’s say, over-diagnoses in the morning and under-diagnoses in the afternoon when she’s tired. Or in the same hospital, you go to one cardiologist who says, “I think we should wait and see how things go. I’m not really worried about you.” And the other cardiologist in the same hospital says, “I’m concerned. We’ve got to do a lot of tests and maybe have surgery.” That’s a noisy hospital. 

Denver: That’s for sure. So assuming that noise has been with us all along, what was the catalyst for you and your co-authors to finally detect it and give it the name of “noise”?

Cass: The genesis lay in some work done principally by the first author, Daniel Kahneman, with an insurance company, in which he did a noise audit of how the different people who work for the company set premiums. And it turned out that in highly realistic scenarios, the different people who work at the company set very different premiums for insurance. Now, that’s a problem, because if you set a premium that’s too high, people are going to say, “I’m going to go to another company.” And if you set a premium that’s too low, people are going to say, “That’s great,” and the company is going to lose money. So the fact that there was a high degree of noise in the underwriters’ choices and judgments was itself startling. 

But more startling than that was this: we asked the company’s top people how much noise they expected, and it was way lower than the actual noise observed. And then we asked them how much money they thought the company was losing as a result, and it’s probably in the billions annually. And that seemed like a kind of universe in a grain of sand, which opened up a whole new area for inquiry– whether it involves giving, or law, or organizational decisions, or medicine, or fingerprint reading, or reading of x-rays. In all of these areas, every area we looked at, we found noise, and more of it than we expected. 

Denver: Lots of noise. I saw that in the insurance example, they were expecting 10%, and it was 50%, 55%. That’s a huge variance. 

Well, this can really adversely impact somebody’s life, this kind of noise, because the decisions will change the trajectory of an individual’s life, and I know the moral outrage that we feel about bias. We get mad. We get angry about it. Do you sense the same moral outrage about noise as we have about bias?

Cass: Not yet. So we hope to spur moral outrage about both bias and noise: with bias, we have a lot to say about it, and we want people to try to correct it; with noise, we want to put a bright spotlight on it and say, “It’s a scandal.” 

So I’ll give an example where it’s a scandal. In the criminal justice system, often the most important thing in determining the sentence is not the defendant, who’s, let’s say, been convicted of shoplifting or assault or a drug offense. The most important moment is who is chosen as the judge. It’s a lottery. So you go to one judge, and the judge says, “Drug offense – really bad. Three years in jail.” You go to another judge who says, “Drug offense – well, that’s life. Probation.” That’s scandalous. That’s outrageous. The idea that someone’s entire life trajectory can be determined by the lottery of who happens to be chosen as the judge — that’s really bad. 

And if you’re a customer dealing, let’s say, with a company that sold you a defective product, they’re noisy in their response to customer complaints if one representative says, “We’re going to give you a new product. I’m so sorry”; another says, “We’ll fix your product”; and yet a third says, “Sorry. These things happen.” That’s a noisy company, too. 

Denver: Let me go back to medicine for a minute, because what you said a moment ago, it was really quite disturbing in a lot of ways, and I want to ask you something about… Let me ask you this: What do you think about getting a second or a third opinion? Other than that potentially being very expensive, the ultimate decision-maker in this is the patient. How are they going to feel if they get three completely different recommendations? Is that going to create anxiety? Or how do you think about those second and third opinions?

Cass: It’s a completely great question, and it’s actually a deep question about the human mind. So if you get an investment advisor who tells you “The stock market’s going to be great this year; put a lot more money into the stock market,” you might feel excited, you might feel informed, and you might just do it. If someone like the authors of the book Noise says “Go ask two other people because there might be noise with respect to investment advisors. In fact, we know there is,” and then you’d go to someone else who says “Go half bonds, half equities,” and then the third says “Stay out of the stock market this year,” you might think, “Oh man, what am I supposed to do?” And that’s bothersome. 

Just as in medicine, some people are reluctant to get a second opinion just because of the dissonance that comes if the doctor they trust is contradicted by another doctor who seems equally competent. So noise can be discomfiting to those who hear it, where it isn’t a loud volume but is just a variance across people who are supposed to be the same. That is an obstacle to getting less noise in the world. 

If you see a doctor who is, let’s say, kind of famously too optimistic, and you’re told that — you didn’t know — and then you go to get a second opinion, that’s okay. You’ve learned that the first one was biased. The same goes for an investment advisor. But noise is a challenge for those who have the most to gain by reducing it. 

Denver: That’s really interesting, because we do like certainty. I sometimes even look at pundits on TV predicting what’s going to happen, and they’re completely wrong, but they come back again with the same degree of certainty because the audience wants that certainty. Even if they have a bad track record, this person is telling you what’s going to happen, and there’s some kind of comfort in that for whatever reason. 

Cass: Completely. So bias is not only intuitive (meaning we get it; we don’t like it), but also, once one sees bias in others or in oneself, correcting it is pleasing. It might be a little embarrassing to see it and correct it, but once you’ve done it, you feel really great. Noise is part of the human condition, which is not readily audible, let’s say. I was going to say visible. But also, it shows you that life often has more uncertainty than we think, or that there’s more pervasive error than we want to believe in organizations. 

So I’ll give you a little example. Before I joined the government, I was a professor, and I’ve done that for a lot of years. I know that if I grade a paper on a morning when I’m really in a good mood and full of energy and enthusiasm, the student’s prospects of getting a good grade are higher than if I graded that paper at night, same paper. And that’s a terrible thought because students’ futures often depend on grading. 

I also know that if I’m not really careful about how I grade, my grading is going to diverge from that of other professors who are exactly like me, except that they were in a different mood, or they just grade differently, and that produces unfairness to students. It also can produce mistakes, which can have consequences. And while grading isn’t the biggest deal in the world, this is replicated in the domain of human judgment broadly. 

Denver: So, Cass, if I am in your class, and there are three essays. And I write my first essay, and you grade my first essay, and you’ve given me a D. Is that going to affect how you’re going to grade my second essay? 

Cass: If I know you had a D… And I try actually not to know how students did on the first essay. I try to have ignorance about that so I don’t get biased. So an early bad grade can anchor a grader such that a teacher will tend to keep giving low grades, having been anchored in that way. Now, that is a reduction of noise potentially, but an increase in bias that comes from this anchoring. And that’s why independent judgments in one’s own mind are generally a good idea. 

Denver: You know, Cass, something I think that we believe we’re all pretty good at, and that would be conducting a job interview. I’ve done a lot of them. I’ve got a great track record. I’ve got a pretty good gut, I can tell, or at least so I think. So what kind of noise takes place in a job interview? And maybe what are some of the things we can do to lessen that level of noise?

Cass: Job interviews are very good at predicting how much the interviewer is going to like the interviewee. So if you interview someone who’s going to work with you, when you think “I really like this person,” chances are you’re really going to like the person. But at predicting other aspects of job performance, job interviews are really poor, even on the part of experienced interviewers. They are typically much less good than recommendations, records, and hard evidence. So that suggests they’re not good predictors. 

They’re also really noisy in the sense that if you’ve got two really experienced interviewers in the same place, they might well differ a lot in how they value the prospects of the candidate. And this might be because of — let’s give three reasons, shall we? It’s a nice case study in the dynamics of noise. It might be what we call “level noise,” meaning that one person is just a tougher interviewer than another and is less bullish on the prospects that people will do well. So you could see this in medicine, where one person overdiagnoses and the other doesn’t. You can see it in criminal justice, where one judge is severe and another isn’t. So that’s just individual difference in severity. 

Another possibility is that one interviewer is just in a really good mood because her favorite football team won, or because the weather’s really nice in her community, and the other interviewer is in a really sour mood. So that’s what we call occasion noise, which happens within a person. Your mood often determines how you deal with a job candidate, even though that will lead to noise and a kind of bias, but a bias that’s variable, so the variability of the bias produces noise. The other kind, which isn’t about severity and leniency, or, let’s say, mood and circumstance, is what we call “pattern noise,” and it’s the most subtle. It took me a while to get it through my head, but it’s the most important, we think, and also the most interesting. 

So you might have an interviewer who really values what university the person attended and thinks if the person went to an elite university, “That’s phenomenal. I’m going to probably hire that person.” And the other interviewer thinks, “I don’t care that much if it’s an elite university.” Or you might have someone who really cares about length of experience and another interviewer who doesn’t. So you could have an interviewer that says, “If you went to Princeton, that’s a big plus for me. If you were in the workforce for seven years, I don’t care.” Another interviewer might say, “Princeton’s good, but not that big a deal. There are plenty of people who are fantastic who didn’t go to a place like Princeton. But If you were in the workforce for seven years, I’m going to be for you.” 

And that can produce a ton of noise across people who are supposed to be similar. We call it pattern noise, and everywhere we look, we find pattern noise. That’s a problem. It suggests that to get interviews right, there are about four things that are good– and if you like, we can talk about them. But with the general, unstructured interview as a predictor of performance, your gut may tell you that you really know what you’re doing, but at least my gut is wrong on that one. 

Denver: Well, tell us about a couple of those things that are good in a job interview.

Cass: So one thing you can do in an interview is just rely on three people rather than one… or maybe six people rather than one. If you take the average or the majority view, the noise will be decreased, and it works. Just like in medicine, if you got a second and third and fourth opinion, that reduces noise. It’s labor-intensive to have a bunch of people, but it works. It reduces noise. 
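The statistics behind this are simple: averaging independent judgments shrinks the spread of the final answer. A minimal simulation sketch (the true value, judge error, and panel size are all invented for illustration):

```python
import random
import statistics

random.seed(0)

# Simulate noisy judgments of a true value of 100: each individual
# judge errs with a standard deviation of 20 (hypothetical numbers).
TRUE_VALUE, JUDGE_SD = 100, 20

def one_judgment():
    return random.gauss(TRUE_VALUE, JUDGE_SD)

def averaged_judgment(n_judges):
    # A panel: take the mean of several independent judgments.
    return statistics.mean(one_judgment() for _ in range(n_judges))

# Spread (standard deviation) of solo vs. four-person-panel judgments.
solo = statistics.stdev(one_judgment() for _ in range(10_000))
panel = statistics.stdev(averaged_judgment(4) for _ in range(10_000))

print(round(solo, 1), round(panel, 1))  # the panel spread is roughly half the solo spread
```

With four independent judges, the noise falls by a factor of about two (the square root of the panel size), which is why the averaging Sunstein describes works even though it is labor-intensive.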

Another thing you can do is to try to delay your intuition with respect to whether to hire the person. Meaning, don’t take your intuition away, don’t take your gut away, but don’t let it come into play too quick. So in an interview, you might have five criteria, let’s say, for hiring, a kind of checklist, where one is experience; another is recommendations; a third is maybe academic achievement, if that’s relevant; and a fourth might be something like diligence– I’m just making those up– and you rank the person on, let’s say, a 0-10 scale with respect to each of those. That prevents you from letting your intuition run away with you, and it’s more structured. 

Also, be very careful about the risk that an immediate rapport, let’s say, in the first 30 seconds will actually make the whole interview sing, and that an immediate didn’t-get-off-on-the-right-foot moment will sour everything. That does happen. And if people are liking each other from the beginning, it may be that the interviewer is going to hire that person even though it was just a coincidence (they had the same first name, or they went to the same school or something), and that can screw everything up. So: delay your intuition, and structure the ingredients that you’re looking at in a pretty systematic way. 
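The checklist Sunstein describes can be sketched as a simple scorecard: rate each criterion independently on a 0–10 scale, and only combine the ratings at the end. A minimal sketch (the criteria come from the conversation; the equal weighting and the candidate's ratings are invented):

```python
# Hypothetical scorecard: each criterion is rated 0-10 independently,
# and intuition only enters after all the ratings are in.
CRITERIA = ["experience", "recommendations", "academic achievement", "diligence"]

def combined_score(ratings: dict) -> float:
    """Average the independent per-criterion ratings into one summary score."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        # Refuse a holistic shortcut: every criterion must be rated.
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

candidate = {"experience": 8, "recommendations": 7,
             "academic achievement": 6, "diligence": 9}
print(combined_score(candidate))  # 7.5
```

The design point is that the structure forces every criterion to be judged on its own before any overall impression is formed, which is exactly the "delay your intuition" advice.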

Denver: That’s great advice. I’ve got to tell you that a lot happens in those first three minutes before the interview even formally begins, and then you’re trying to confirm what you felt in those first 180 seconds, and that is not the way to do it. And I guess what you’re also saying is you disaggregate: don’t make this holistic decision, but really break it into very individual decisions, and then look at them collectively at the end.

Cass: One of the great advances in noise reduction in medicine is that when a baby is born, there’s something called an Apgar score, where they look at five things and rate each on a scale of zero to two. And the Apgar score turns out to reduce error extremely dramatically with respect to judgments about how healthy a baby is, and that’s really important. 

And it’s also, just what you’re describing, it’s a matter of taking the holistic judgment — is the baby healthy? — and just splitting it up into concrete things which you can give numbers to, and then you’re much more likely to make a good judgment about whether the baby’s going to do fine.

Denver: Interesting. So, taking everything that we’ve discussed so far, can you extrapolate on that, Cass, and apply it to meetings? How can we conduct better and more effective meetings? 

Cass: What a fantastic question! So, let’s focus on how meetings sometimes go wrong in a way that I’ll describe as noisy. 

Suppose the boss in a meeting says, “I tend to think this. What do you all think?” The high likelihood is that the group is going to coalesce around and support the boss. In fact, when I worked in the government in a prior job, I was confirmed by the Senate on a certain date, and after I was confirmed by the Senate and I was the boss of an organization, I found that whatever I said in the first 60 seconds of the meeting was taken as authoritative and brilliant, even though I had no idea what I was talking about. Because people thought, “He was confirmed by the Senate. He’s the boss. He probably knows what he’s doing. And even if he doesn’t, we should act as if he does.” 

There’s noise there in the sense that that meeting had a cloud of possibilities. It could have gone in a number of different directions, and it happened to go in the direction that suited the boss’ inclinations because the boss happened to speak out first, and everyone just all aligned. But if the boss had decided to call on, let’s say, his or her most trusted advisor and say “What do you think?” and the advisor had thought something very different from the boss’ inclination, then that meeting could have gone in a radically different direction, and now we’re talking about noise. 

A good way to start a meeting is for the boss to say, “I really am not sure what I think, and I want to get as many views out there as possible, and I want them to be independent.” And then, there are things you can do that might be a little hokey where you say, “I want everyone to write down their views before anyone speaks. I want that to be a way that people get clarity in their own minds.” Or you could just create a culture by which independent thought is strongly encouraged, and this seems a little cliched, but it really is a way to make organizations work better, and I’ve seen it in real-time. 

I can say that President Obama, for whom I worked for four years, whether or not you like his politics, he was a master at trying to elicit the views of everyone in the room, precisely in order to prevent a kind of rush to the view of the most important or most articulate or most confident person who spoke early. So if you had a lot of independent views out there, that will give the group clarity that going with the most powerful person or a particular thing quickly might lead to a mistake. 

Denver: So aside from delaying intuition, the boss has got to delay speaking up at these meetings. I can certainly see that. I don’t think bosses realize the power that they have with their words. I remember Eric Schmidt of Google said at a meeting somebody would make the presentation and he would just say, “That’s interesting.” That’s all he would say, and three task forces would have formed after the meeting to start on that when he said these little words. So, talk a little bit about a noise audit. What is that, and how does it work in an organization? 

Cass: So we have seen noise audits in various fields, though they haven’t been so named. Here’s one way to do a noise audit. Take a bunch of doctors; present them with a realistic scenario about, let’s say, some sore throat issue. Give them as much data as you have, and ask them: What’s your judgment about the diagnosis? If they all say the same thing, then you’re not observing noise. If you see major differences, then there’s noise, and you can probably quantify it. 

In the legal system, there have been noise audits in the sense that judges have been presented with scenarios involving someone who did something against the law, and judges were asked: What’s your sentence? They were given a lot of details about the offender and the offense. And then you can tell whether there’s noise, and there’s a lot of noise there. In the insurance company example, we did a noise audit in the sense that we asked a bunch of underwriters what premium would they set, given some realistic facts about the person seeking insurance coverage, and we found a lot of noise. 

So you can do this within many organizations. And it’s pretty easy. You can make it formal or informal, and chances are you’re going to discover there’s a surprisingly high amount of noise, and then the question is on the table: What are you going to do about it? 
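Quantifying the result of a noise audit can be as simple as measuring the spread of the answers to the same case. A toy sketch with invented premium figures (the book's own noise index is computed somewhat differently; this just uses the standard deviation as a share of the mean):

```python
import statistics

# Hypothetical noise audit: five underwriters each quote the same
# realistic case. The figures are invented for illustration.
premiums = [9_500, 16_000, 12_200, 21_000, 8_800]

mean = statistics.mean(premiums)
spread = statistics.stdev(premiums)

# One simple summary: the spread as a percentage of the mean quote.
noise_pct = 100 * spread / mean
print(f"mean quote ${mean:,.0f}, spread {noise_pct:.0f}% of the mean")
```

If the quotes all agreed, the spread would be near zero; a spread that is a third or more of the mean, as in this invented example, is the kind of result that startled the insurance company's executives.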

Denver: It’s such a fascinating concept. If I asked you “Is noise good or bad?”, how would you respond? 

Cass: Well, noise in the sense of a lot of volume at, let’s say, a Taylor Swift concert is a really good thing, I think. I’m a fan of Miss Swift, and I like it when she plays at high volume. 

Noise in the sense that we’re describing is, by definition, bad, and that’s just a matter of definition. Our definition of noise is unwanted variability in judgment, and if it’s unwanted variability in judgment, it’s bad. So if you have people who are trying to fix something that’s broken and they take different approaches, then unless those different approaches both work, you have a signal that something’s gone wrong. And if you have lawyers who give you radically different advice about whether you should sue in the same circumstance, that’s unwanted variability.

Disagreement is not unwanted. If people disagree about Taylor Swift– well, I happen not to love that because I think everyone should think she’s great– but a different divergence in taste makes the world go round, so that’s completely fine. But unwanted variability in professional judgment is always bad. 

Denver: No question.

Cass: Whether it’s worthwhile to eliminate it is a different question. It could be too costly to eliminate it. 

Denver: And also, maybe I’m wrong on this, but you don’t want to suppress noise in an organization either. You want the noise to get out there at some point so you can identify the noise and then coalesce around a way in which to go. Would that be a fair statement? 

Cass: Well, yes. We have to be clear what we mean by noise. Disagreement is not noise. So if you have an organization which is deciding whether to open a plant in some other city, and people have different judgments that are going to be inputs into the discussion, that’s part of deliberative process, and that’s not noise as we understand it. 

Noise is unwanted variability in judgment. And that could be the case if you go to an engineer and say, “How are we going to solve this problem?” And one says one thing, and another says another thing in circumstances in which they can’t both be right, and that’s unwanted. So disagreement is not noise. 

Denver: You have said, Cass, that superforecasters are less noisy, and they don’t show the variability that the rest of us generally do. What is it that they’re doing differently? 

Cass: Thank you for that. So let’s just note that forecasting is a noisy business. If you ask a bunch of people what’s going to happen with respect to anything under the sun, you’re going to see noise even among people who are supposed to be experts. Philip Tetlock and collaborators have discovered that certain people just are really good forecasters. They tend to be smarter than the average, but that’s not their only comparative advantage. There are a lot of smart people who are horrible forecasters. 

What superforecasters tend to do is think in terms of probabilities rather than absolutes. And I should confess that, having worked on the book, this is one of the things I’ve learned to do in my own professional and personal life. They don’t ask: Is this going to happen? They ask: What’s the probability this is going to happen? So if they think it’s going to happen, they might say “It’s 55% likely to happen,” or they might say “It’s 95% likely to happen.” And that difference — that’s a really important difference. Whereas non-superforecasters, let’s say the rest of us, tend to think “This is going to happen. The Democrats are going to lose the House,” rather than “The Democrats are 60% likely to lose the House.” So that’s one thing they do. 

They also tend to break up problems into component parts. So if they’re asked “Is this going to happen?” they don’t reach for a holistic judgment. They think: What are the five things that would have to happen in order for that thing to happen? And if those five things include two that are improbable in the extreme (even though they contain three that are highly likely), and all five have to happen, they’ll think it’s really unlikely to happen. 

So, if we’re thinking about how likely is it that if we move to a different city, we’re going to enjoy our new life; or how likely is it that if we take a new job, it’s going to turn out great; or how likely is it that something big is going to happen involving China or Russia in the next six months? If we think: what are the pieces of this problem that would have to come through to fruition in order for the ultimate prediction to be justified one way or the other — superforecasters are really good at doing that.
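The arithmetic behind breaking a forecast into parts is worth making explicit: if several roughly independent things must all happen, their probabilities multiply, and a couple of unlikely components dominate the result. A back-of-the-envelope sketch with invented probabilities:

```python
import math

# Hypothetical forecast broken into five component parts that must
# ALL happen. Three are highly likely; two are quite unlikely.
component_probabilities = [0.9, 0.9, 0.9, 0.2, 0.1]

# Assuming the parts are roughly independent, the joint probability
# is the product of the individual probabilities.
joint = math.prod(component_probabilities)
print(f"{joint:.3f}")  # 0.015: unlikely overall, despite three near-certain parts
```

Note the independence assumption is doing real work here; when the parts are correlated, the product is only a rough guide, but the qualitative lesson (improbable components dominate) survives.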

Denver: And just what you said about probabilities, they really embrace the gray. We live in such a binary world of good or bad, yes or no, black or white, but the real genius is found in that gray. And when you do 70% or 62% or something, you’re beginning to get into a discussion that is so much more thoughtful than just putting the stake in the ground. And I can really see that.

Cass: With superforecasters, what the data is increasingly demonstrating– this is fascinating to those who discovered the existence of superforecasters– is they’re less noisy. So if you find five superforecasters, they’re not going to be super noisy. They’re going to be… they could be called super quiet. Superforecasters are super quiet. 

Denver: There you go. Well, more and more decisions are being made or at least are being supported by algorithms. How will the use of algorithms impact noise levels? 

Cass: An algorithm drives noise to zero. So whether an algorithm is a good thing, all things considered, is another question. But it’s good in this sense: If you have an algorithm to decide who gets into, let’s say, university, or an algorithm to decide whether you get surgery for a certain condition, or an algorithm to decide who gets bail — there’s not going to be any noise. And that’s just by definition. Algorithms spit out the same answer every time, and human beings don’t. So for some problems, a human being will vary depending on mood and weather and how the family’s going. An algorithm isn’t going to vary in that way. So algorithms are noiseless. 

And if this seems scary or puzzling, think of a rule, such as a rule that would say something like: If you’re over 21 years old, you can buy alcohol; if you’re under 21 years old, you can’t buy alcohol. That’s a noiseless process. It’s not like a standard, which would say you can buy alcohol if you can handle it, or you can buy alcohol depending on four things, which the decision-maker gets to balance. Then you’re going to have a lot of noise out there. But with a rule, no noise, and algorithms are basically applied rules. 
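The rule Sunstein describes is, in effect, a deterministic function: the same input produces the same output every time, whoever applies it and whenever. A minimal sketch (treating exactly 21 as old enough, which the spoken rule leaves ambiguous):

```python
# A rule, not a standard: no judgment, no mood, no balancing of factors,
# and therefore no noise. Same input, same output, every time.
LEGAL_DRINKING_AGE = 21

def can_buy_alcohol(age: int) -> bool:
    return age >= LEGAL_DRINKING_AGE

assert can_buy_alcohol(25) is True
assert can_buy_alcohol(18) is False
print(all(can_buy_alcohol(30) for _ in range(1_000)))  # True, every time
```

A "standard" would instead take several inputs and leave the decision-maker to balance them, which is exactly where the noise comes back in.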

Denver: You mentioned cash bail a moment ago. Algorithms have had a profound impact on that, haven’t they? 

Cass: Well, there are studies that find that algorithms for giving people bail or not greatly out-predict human judges in the sense that if we used an algorithm rather than a judge, we could keep the same number of people in prison and reduce crime rates dramatically. Or we could keep the crime rate as it is and greatly reduce the prison population. So algorithms are much, much better than human judges in predicting flight risk, which is associated with criminal activity. 

So whatever dimension we care about, the algorithm can help us do better. And one reason is that the algorithm isn’t noisy. Human judges are really noisy. 

Denver: So if we take these things and things such as the average of many independent judgments will bring the noise level down, et cetera, how will all of this impact leadership and the way organizations are structured as we look to the next decade or two?

Cass: So we’re hopeful. Danny Kahneman, the first author of the book, often says, and said this during the writing, that this book is premature in the sense that we’ve discovered a continent that hasn’t been visited (I give my co-authors 100% of the credit for this; I got to be along for the ride). It’s the continent of noise. 

Bias – we’ve all been there. The thought is that now we have some clarity about the category. We’re going to see an outpouring of work by organizations in reducing noise, and the discoveries are going to be made by people, researchers, and practitioners that the authors of this book could not possibly have anticipated. But a lot of the remedies will be in the general domain of what we call “decision hygiene.” 

And the reason we call it decision hygiene is that if you have, let’s say, a strep throat, you know the medicine. You take the medicine, maybe an antibiotic. If there’s a bias, we basically know what the remedy is. It might not work, but we know what it is. Hygiene is something you do when you don’t know anything other than that it’s going to work against a number of things. So if you wash your hands a lot, there are germs that you will avoid, and it’s a kind of all-purpose protection against what might go wrong — not perfect, but positive. Hygiene is different from a medication. 

Decision hygiene means there are about six things an organization can do to reduce noise, and those things will also reduce bias in the process. So we’ve already explored a few. One is to use a number of independent judgments rather than having one person drive the outcome. A second thing you can do is break up a problem into component parts, structuring judgments rather than making one holistic judgment. Another thing you can do is run meetings along the lines we discussed. Another thing you can do is delay your intuition. Don’t let your intuition get ahold of you from the beginning. 
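The first remedy on that list, aggregating independent judgments, rests on basic statistics: the noise in an average of n independent, equally noisy judgments shrinks roughly by a factor of the square root of n. A minimal simulation sketch in Python (the true value, noise level, and panel size are made-up numbers, purely for illustration):

```python
import random
import statistics

# Each "judge" estimates a true value of 100 with independent
# Gaussian noise (standard deviation 20). Averaging n independent
# judgments shrinks the noise roughly by a factor of sqrt(n).
random.seed(42)
TRUE_VALUE = 100.0
NOISE_SD = 20.0

def one_judgment():
    """A single noisy judgment of the true value."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_judgment(n_judges):
    """The mean of n independent judgments."""
    return statistics.mean(one_judgment() for _ in range(n_judges))

def spread(estimator, trials=2000):
    """Standard deviation of an estimator across many repeated trials."""
    return statistics.stdev(estimator() for _ in range(trials))

solo = spread(one_judgment)                    # roughly 20
panel = spread(lambda: averaged_judgment(16))  # roughly 20 / sqrt(16) = 5
print(f"noise of one judge:       {solo:.1f}")
print(f"noise of 16-judge panel:  {panel:.1f}")
```

Averaging does nothing for a shared bias (if every judge reads five pounds high, so does the panel), which is why it is a noise-reduction tool specifically.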

And there are some cases where there is information you want to withhold from decision-makers on the ground that it can screw them up. So we know that for fingerprint examiners, if they get some information that may bias their judgment toward finding a match or not a match, it’s better to withhold it and just ask them to apply their fingerprint expertise. And organizations, generally, can do that. 

When I worked in the government under President Obama, there were two organizations I really particularly loved. One was– I loved them all, but there are two, I particularly– 

Denver: There you go. That’s so very political of you. 

Cass: One was the Council of Economic Advisers, and the other was the Office of Science and Technology Policy. And the reason I loved them is that when, let’s say, the scientists were asking whether a chemical is dangerous, they would ask that question in a kind of– let’s call it “epistemic isolation.” They either didn’t know or bracketed every other question — politics, economics, what the interest groups wanted. They didn’t think about that, or maybe they didn’t even know what the President wanted. They didn’t know. They didn’t think about it. They thought: Is this chemical dangerous? That’s our question.

And that was epistemic ignorance, meaning they had ignorance of all sorts of knowledge which might, in some sense, be relevant to the decision. And in an organization, to have people asking questions in ways that avoid being biased by other things — that’s a form of decision hygiene.

Denver: It’s almost like having an away-team in your organization that’s not immersed… but is over here, and they come up with recommendations. That’s great. 

Well, you have authored so many books, Cass, and you’ve done some with other people like Nudge, but I’m sure you have a system, and you have a process that you follow. I’d be so curious what it was like writing a book with two other co-authors, those being Daniel Kahneman and Olivier Sibony.

Cass: This book could really, genuinely… there could be a movie about the writing of this book. I can’t say that’s true about any other book I was involved in. It couldn’t be a movie. It could maybe be an essay.

And the reason is that Danny Kahneman, the first author, is larger than life: in his 80s, a fount of creativity and originality. I’ve never seen anything like it. Part of his creativity and originality takes the following form. He will have a phenomenal idea on Monday. Olivier and I will think: “That’s amazing! Let’s all write it up.” Then on Tuesday at 3:00 AM, he will write us a note saying, “That idea I had – it’s terrible.” 

Denver: Trash. 

Cass: “Imagine how I had it!” or “Why we didn’t see the obvious flaw in it!” Then at 4:30 AM, he’ll write a note saying, “Wait. I might see a solution that can rescue this really bad idea.” And then at 5:45 AM, we’ll get a note saying, “The idea was terrible. The rescue operation succeeded. Here’s a new version of the idea, very different, and I think it’s going to work.” And then we’ll meet maybe at 10 in the morning and have a plan to write it up, and we will write it up, and it will be, by my lights at least, really good. And then three months later, we get a note at 3:00 in the morning from Danny saying, “How could I have been so stupid? This has an obvious flaw.” 

So this book isn’t a short book. It’s over 400 pages. 

Denver: Yes, it’s a great book. 

Cass: Thank you for that. I think we have probably at least 4,000 pages that were either excised from the book on the ground that they weren’t good enough or were precursors of what’s in the book. I would guess that for that 4,000-page outtake, even if our hourly rate is really low, it’s more than $4,000 … for those 4,000 pages. They never made it in. 

So there was a lot of drama in the writing of this book, in the sense that Danny was concerned that we weren’t getting it right. And Olivier and I provided our own forms of drama — at least, I am speaking for myself. I wrote many pages about law and policy that I worked so hard on and that my co-authors, at multiple stages, revised and edited and made much better, and they thought it should go in the book. But at the later stages, I thought, “You know, this isn’t a book about law. It’s too much detail, too much information.” And so, with not literal tears but a form of wry despair, I thought, “That’s leaving the book.” 

Denver: Well, as I said, the book is great, but I do have a hunch that the movie about writing the book will be even better. It sounds great. Any other way your decision-making has changed as a result of writing this book, other than thinking in probabilities?

Cass: Completely. So now I’m working in the government, and in the stages when I was finishing the book, I was involved in many professional organizations. The thought that if I have a leadership role, to be as quiet as I can be — that’s fundamental. The thought to try to find people who are part of an organization who maybe haven’t spoken but might well have something to say that no one else is thinking — that’s in the book Noise, and that’s something that I try to practice. If there’s someone in the room, maybe it’s the youngest person, maybe it’s the oldest person, maybe it’s the most introverted person, that person probably knows something that no one else knows; so try to get that person to talk.

Also, to think at every stage of a decision until it’s public, to think that it’s potentially revisable, partly because our organization might be noisy, and there might be something that’s going to come up that will stabilize our decision in a different way — that is akin to, let’s say, what a superforecaster would do if there were a range of them and they all got stabilized. 

Denver: Well, just having worked with Daniel Kahneman, I think you know that every idea is revisable. And many, many times. 

The title of the book is Noise: A Flaw in Human Judgment, and you can learn more about it by visiting the website. Really some truly original concepts that will help lead to better and more equitable decision-making; so go pick up a copy. 

Thanks so much for taking the time to be here today, Cass. It was such a pleasure to have you on the program.

Cass: Great. Thank you. Great pleasure for me.

Listen to more The Business of Giving episodes for free here. Subscribe to our podcast channel on Spotify to get notified of new episodes. You can also follow us on Twitter, Instagram, and Facebook.
