Lesson 3.6

Understanding Moral Foundations

Dive into this topic by watching the video, followed by key explanations and exercises below.

Key Concepts:

The six underlying moral foundations shared by most human beings

Dr. Jonathan Haidt of NYU (author of the book The Righteous Mind) is one of the pioneers of research in “moral psychology.”

His research on Moral Foundations digs into the underlying morals behind humans’ decisions. It explains in large part why good people can disagree so viciously on things like religion and politics.

In other words, it explains why I hear my good-hearted, politically conservative Mormon and Protestant friends back home in Idaho say the same thing about my good-hearted liberal, atheist, and agnostic friends in New York that those friends say about them: “I can’t believe someone could believe in that!”

Haidt’s research says that we can develop respect for differing viewpoints if we make the effort to unearth their moral motivations.

Few people actually think of themselves as evil. So, unless you’ve got something wrong in your brain (e.g. you’re a malignant narcissist or a psychopath), you will tend to justify your decisions to help you feel like a “good” person. Under the surface, you’ll create good reasons for what you think.

Moral Foundations theory says humans share at least five innate moral foundations that serve as the universal building blocks of morality. For the most part, evolutionary psychologists and spiritual belief systems agree on these. They are:

  • Care. Being kind and preventing harm.

  • Fairness. Justice and not cheating people.

  • Loyalty. Patriotism and self-sacrifice for the group, not betraying the group.

  • Authority. Deference to legitimate authority for the good of the group.

  • Sanctity. Striving to be noble, clean, and not contaminated.

And some emerging schools of thought believe there may be one more foundation inherent to all (or at least most) humans:

  • Liberty. Rights, freedom, and the rejection of constraints and oppression.

Studies show that we give our in-group lots of benefit of the doubt because we think “they’re good people.” We understand their underlying morals.

But we don’t afford our out-groups the same benefit of the doubt. They might be “bad people,” so we justify not respecting them. (You can see this any time someone calls someone else a “liberal” or “right-wing” in a derisive way, implying through the label that the person is evil and therefore anything they say is suspect.)

So, Haidt says, when we’re dealing with others, it pays to step back and identify the underlying morals they are operating from. Once we can isolate the moral values driving someone to think what they think, we can more easily respect them even if we disagree.

As a hypothetical example:

Let’s say that my buddy back home and I disagree on a charged topic—like, what to do about immigration to the US.

Now, my buddy might make some common anti-immigration arguments about crime and economic impacts. He may say that people who sneak into the US burden the system, and that breaking the law to get here is wrong.

I might make a pro-immigration argument, saying it’s wrong to prevent people from living where they want to live. I may say that our immigration laws are unnecessarily cruel. I may point out that my best friend, my girlfriend, and my roommate are all immigrants, and that they make my life and this country better.

Underneath, what each of us is really doing is using post-hoc justifications to back up the moral intuitions we value most. And so as the argument continues, we’ll trot out statistics or stories that confirm our biases. We’ll tune out inconvenient evidence that calls our particular stance into question. The fact that my Brazilian best friend pays hella taxes, and my Guatemalan girlfriend makes everyone around her a better person, might be dismissed by my buddy with an anecdote about a foreign gang member shooting someone in Texas.

We may not even realize it, but we’re not respecting or considering each other’s arguments while we’re so busy defending our own. This conversation won’t go anywhere, and it’s likely to leave us disliking each other.

But say we forced ourselves to dig out the moral motivations behind our immigration stances. We might end up unearthing this:

My buddy values Fairness and Authority above all else. So he thinks it’s not fair that some people can break the law and get away with it (entering the country illegally). Even if the law is a little cruel, breaking the law is a betrayal of society. And he thinks it’s not cool to disrespect the Authority of a country by breaking its laws, even if the law itself is not cool. Finally, my buddy might be worried about the Sanctity of the country. Letting in anyone means we might let some bad guys in, too. It’s good to not risk contaminating the swimming pool, he’d say.

Once we unearth this, I can acknowledge that my buddy’s motivations are good, even if I disagree with his conclusions. After all, I can get down with Fairness and Authority too. Even though those aren’t my primary morals, I understand that my buddy is coming from a place of trying to do the right thing.

In contrast, I can help him see how I value Care and kindness above all else. If he’s listening, he’ll agree that that’s a good thing, too. I can explain how I think we should treat people like they’re valuable no matter where they were born. This explains why I think restricting immigration the way we do is unkind. And he might be surprised to discover that I also value Fairness. The way I see Fairness in the case of immigration is that it’s not fair to tell one human they can live here and another they can’t. We don’t choose where we’re born, and I think it’s unfair to restrict someone for that.

So we both value Fairness, we just apply it in different ways.

Once we unearth these moral foundations, even if we still don’t agree on a conclusion, we have earned respect for each other’s viewpoints. I see my buddy as a good person. He has good moral motivations behind his arguments. And he sees the same in me.

This means we might be able to have a more productive conversation about what to do. We might just be able to employ the next three parts of IH (intellectual humility) and get somewhere together.

As Dr. Haidt summed it up in his first TED talk: “A lot of the problems we have to solve are problems that require us to change other people. And if you want to change other people, a much better way to do it is to first understand who we are—understand our moral psychology, understand that we all think we’re right—and then step out, even if it’s just for a moment, step out of the moral matrix, just try to see it as a struggle playing out, in which everybody does think they’re right, and everybody, at least, has some reasons—even if you disagree with them—everybody has some reasons for what they’re doing.”

Practice This:

Think about someone who holds a point of view that you vehemently disagree with, then work through these steps:

  1. Ask yourself: do you think they think of themselves as a good person? (Unless they are literally a psychopath, the answer is probably yes, but it’s important to admit this before going to the next step.)

  2. Summarize their argument in an objective way, without adding any judgment or adjectives that inject your opinion of the argument. E.g. “The argument is that we should lower our sales commissions.” NOT: “The argument is that we should screw our salespeople out of their full commission.”

  3. Now see if you can trace the roots of their argument down to one or more of the six Moral Foundations. E.g. The argument that we should lower our sales commissions is rooted in the moral foundations of Loyalty and Care, because doing so will help the broader company with its financial stability, and therefore help us take better care of other employees.

  4. Now think about how you can determine whether this is actually how the person thinks. (I.e., ask them!) In some cases, you’ll be wrong about this. (E.g. in the example above, it’s possible that whoever’s making the argument instead justifies the change in commissions from an Authority standpoint: it’s the right thing to do because the boss says so. In that case, this discussion needs to be had with the boss!)

  5. After getting at the moral foundation of the argument, you’re ready to trust the person’s intentions, have a more productive debate with them, and share your own opinions in terms of the moral foundations they care about. E.g. I think we should not lower sales commissions, because we ought to take Care of our salespeople, and because being Fair about their compensation for the work they do is being Loyal to our original company mission.

Making this little exercise a habit before you engage with someone on a thorny issue is extremely helpful for practicing your own intellectual humility, and for making the case for your ideas in terms that will mean something to people. Try it!