Jonathan Haidt is a self-described liberal-turned-centrist. Like most centrists, his strength is that he doesn’t get swept up in tribal conflicts between conservatives proving that liberals are crybabies and liberals proving that conservatives are idiots, and instead gets swept up in tribal conflicts between centrists proving that partisans are misguided and partisans proving that centrists are wimps. The Righteous Mind is his case for centrism, wherein he sets out to explain why politics is so divisive, and what we can do about it.
Haidt’s writing sticks in your brain. He coins catchy aphorisms like “we are 90% chimp and 10% bee,” and “morality binds and blinds.” He champions allegorical heroes like Glaucon son of Ariston and Émile Durkheim, and puts their faces on flags for broad ideas like Glauconian cynicism, that humans will generally be selfish towards strangers unless their reputation is on the line, and Durkheimian utilitarianism, that maximizing the welfare of society requires us to look at humans as homo duplex rather than homo economicus, two more terms he uses to refer to our simultaneously hivish and selfish nature. The Righteous Mind is divided into three books, each with a thesis that builds on the last. Chapters are organized thematically into subsections which often begin with a direct request that the reader should now buckle up and thereafter keep her hands and feet inside the vehicle: “partisan readers may be able to accept my claims… in the abstract, but not when I start saying that the ‘other side’ has something useful to say about specific controversial issues. I’m willing to run this risk,” he writes, with the flourish of a hypnotist assuring an audience that what he is about to do is perfectly safe, and the tactful clarity of a TSA agent about to pat you down: methodical, respectful, but forceful. The book is strongly structured. Every chapter ends with a summary of its subsections, and every book ends with a summary of its chapters, bringing each thread of his main idea back together, finally, in a prolate spheroid woven out of references to empirical experiments and their naturalistic implications. It is at times poetic: “If a factory farm finds a faster way to fatten up cattle…” I found myself delighted when a footnote would include not just a citation but a little bonus insight, hidden just off the beaten path. It’s basically a masterclass on persuasive writing.
From subtle signals of his ethos, as Haidt recounts angry letters from both conservatives and liberals as evidence of his centrism, to quick sleights of hand (“I have tried to use intuitionism while writing this book… if I have failed and you have a visceral dislike of intuitionism or of me, then no amount of evidence I could present will convince you that intuitionism is correct”) which emphasize the metaphysical nature of his argument, it’s hard not to come out the other side of this book at least a little convinced of his account of how we make moral judgements and why that’s important to our treatment of opposing viewpoints.
The first book gives Haidt’s account of moral intuitionism, the view that moral judgements are emotional intuitions, and that they are hard to overcome using reason. Robert Zajonc demonstrates the effects of intuitions: by flashing a bad word on a screen before a good word, the good word becomes harder to categorize as good (and vice versa). This has also been shown by implicit association tests; our intuitions affect our split-second judgements even when we don’t realize it. But it’s not enough to look at psychology lab exercises — studies have also shown that juries are more likely to acquit attractive defendants, and that when beautiful people are convicted, judges give them lighter sentences. Alexander Todorov demonstrates in a separate study that initial judgements often transfer into long-term judgements even when we have plenty of time to think. Subjects give their split-second judgement of two candidates in past Senate and House of Representatives elections based only on their photos, and are asked which candidate looks more competent. This initial judgement matched the actual winner about two times out of three.
But we already know that voters are uninformed (often they’ll flat-out admit it), so what about the ones who can actually explain why they voted for their candidate? Haidt wanted to better understand the relationship between judgement and explanation, so he designed a study in which Psych 101 students were hypnotized to feel emotional reactions in response to trigger words, asked to evaluate a story, and then made to explain their evaluation. He found that subjects negatively judge a neutral action just because its description contains a negative trigger word, and they even double down on their explanations when questioned. So, as Haidt puts it, the human mind is like a rider on an elephant: the rider is able to reason about the path and somewhat guide the elephant, but once the elephant intuitively starts to lean in a certain direction it’s hard to convince it otherwise. First impressions matter more than we think, at least to the extent that Psych 101 students are representative of the rest of us.
This thesis is important. It’s what makes the rest of his argument so compelling. If you’ve ever been in a disagreement with someone and asked them to logically explain a moral judgement on a social issue, it’s frustrating when they are unable to separate the concepts of morality and social convention. To borrow an example from the book, in India many people say that it is morally wrong for widows to eat fish. From our point of view we can see that this is just a social convention. There are plenty of modern societies that don’t have such a tradition, so it would be a mistake to classify it as a moral rule, since those tend to be more universal, like how every society has rules against murder. When a widow eats fish it does not harm anyone, except maybe the fish, and in fact we would probably flip this judgement in the opposite direction: it’s morally wrong to limit the freedoms of widows and disallow them from eating fish.
But if everyone’s moral judgements are actually based on intuition and emotion, we might wonder whether or not there’s really a line between morality and social convention. Judgements based on social convention can certainly feel as important as moral judgements do, so this calls into question whether our enlightened separation of these concepts, our insistence that people logically explain their moral rules in terms of how they prevent harm, is how actual morality works.
To address this question Haidt brings up a few points. First, experiments have shown that moral judgements are affected by bodily disgust: for example, you can sway people to make harsher judgements on moral issues related to purity by placing them near foul smells, or by having them wash their hands first. If moral judgements are primarily reasoned then this shouldn’t be possible. Second, he points out research showing that psychopaths are bad at moral judgements even though they have no deficiencies in general reasoning, and babies are good at moral judgements even though they’re bad at general reasoning. (And don’t even get me started on psychopath babies.) Haidt concludes that reasoning is unreliable and unimportant in making moral judgements.
There’s a particular sentence in this section of the book that I had to reread several times. “Anyone who values truth should stop worshipping reason.” This was where I started to understand that Haidt’s definition of reason is very different from my own. When I think of reason I think of the scientific method, in which we perform experiments to test the beliefs we hold for the exact reason that we want to eventually believe the truth. Haidt, on the other hand, is being descriptive. Imagine he had written “what most people do when you ask them to reason doesn’t get them any closer to the truth.” Now this I agree with.
Haidt’s answer is that we’re actually not that bad — it’s just that reasoning isn’t meant to help us find the truth. He invokes Glaucon, the brother of Plato:
Imagine what would happen to a man who had the mythical ring of Gyges, a gold ring that makes its wearer invisible at will: “No one, it seems, would be so incorruptible that he would stay on the path of justice…” Glaucon’s thought experiment implies that people are only virtuous because they fear the consequences of getting caught — especially the damage to their reputations.
Whether or not we agree with Glaucon that this is the only driving force behind all of our actions, we might still consider Haidt’s dialed-down version of this: the goal of reasoning is not truth, but reputation. Imagine a political society like ours electing a candidate who is not very smart. Obviously it is in that candidate’s interest to appear smart, lest they hurt their reputation and lose votes. But in fact this tendency is not limited to the dumb candidate. Perkins, Faraday, and Bushey showed that although higher IQ is correlated with better everyday reasoning ability, measured as the number of arguments generated on either side of an issue, the correlation exists only with the number of arguments in agreement with the subject’s viewpoint. In other words, even high IQ subjects are not better at fairly evaluating both sides of the issue; they’re just good at coming up with arguments that fit their viewpoints. In another experiment, Philip Tetlock showed that when subjects are given a legal case and asked to infer guilt or innocence, the ones who are told they will not have to explain their reasoning make significantly more errors, suggesting that a fear of reputation damage is the main incentive for reasoning.
Tetlock also found that if a special set of conditions holds, you actually can get people to use reasoning for truth-seeking. These conditions, as Haidt summarizes them, are that subjects learn before forming an opinion that they will be accountable to an audience, that the audience’s views are unknown, and that they believe the audience is well informed and interested in accuracy.
This is sort of like turning the engine in on itself. If people are always working to protect their reputations, and you set up conditions specifically so that their reputations will be boosted only if they honestly try to find the truth, you’ve in a sense used Haidt’s Glauconian cynicism as fuel for overcoming bias.
Given the above studies, I actually don’t think that emotional and intuitive biases are insurmountable. Affective priming and implicit association show us exactly what we look like in the split second before our reasoning kicks in, but whether this has a significant lasting effect is unclear. The effect of attractiveness on judges’ decisions, for example, is not observed in decisions of guilt, and is most prevalent in the price of fines for committing misdemeanors, which are admittedly large: between 2 and 3 times higher for unattractive defendants. Studies using mock juries, including the one Haidt cites, have received criticism, as mock juries are not subject to the process of jury selection that our actual justice system uses, and participants in a mock jury don’t face the real life effects of determining a defendant’s future. Additionally, Todorov’s research on Senate and House of Representatives elections shows a 70% success rate, which is better than the 50% success rate you would get if you made your predictions by chance, but this isn’t too damning for our reasoning ability if you’re willing to accept that a fifth of voters don’t do research anyway. Haidt’s point here is not necessarily that emotional bias is so strong that it’s completely impossible to overcome, it’s just that there is a bias, it exists, and he wants us to remember that.
So, if our moral judgements are just emotional intuitions, and there’s nothing special about things we can argue will cause harm, then what does Haidt think is moral? Everything? Anything people agree on? Does elevating the rule “widows shalt not eat fish” to the level of “thou shalt not kill” make him a moral relativist, ready to call any set of rules morals as long as a society broadly agrees on them?
The way Haidt sees it, there is a difference between being a moral pluralist and being a moral relativist. A pluralist is someone who thinks morality cannot be explained with one single yardstick that we use to measure everything, for example, how much harm something causes. One yardstick is too few, but that doesn’t mean there are infinite yardsticks you can use — there is some fixed number. Haidt says this number is six. There are six moral foundations which served evolutionary purposes and elicit strong moralistic emotions, and all human societies are built on combinations of these foundations in varying degrees. They are: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, sanctity/degradation, and liberty/oppression. He explains that these emerged from group selection on top of basic cognitive modules which we evolved through individual selection. For example, sanctity is rooted in an aversion to things that cause disgust. At the individual level, humans with an aversion to things like poop and bugs would be less likely to die of disease before having children. At the group level, societies that sanctify rules tend to have more stability than those that do not. These foundations are the subject of the second book.
Moral foundations play a role in binding groups of people together. Spiritual communities are groups that have a shared sense of sanctity. The military is efficient because troops are bound by a sense of authority and loyalty. Fighting oppression and punishing cheating is part of promoting cooperation. These are tenets of large societies because they help us overcome the difficulties of working together with people without blindly trusting them. This, Haidt says, and now he is being prescriptive, is what morality is all about: promoting societal cooperation through shared values and norms. This is why we might say that it is immoral, not just unconventional, for a widow in India to eat fish. While we might prefer to allow everyone autonomy, it is often necessary to limit freedoms in order to prevent destabilization, and you cannot help the bee by hurting the hive.
I have a hard time agreeing with Haidt here. I can see how, evolutionarily, these foundations may have been useful adaptations. We have a lot of useful triggers that don’t make logical sense, like feeling fear when we think we see a tiger in the bush without checking if it’s just a trick of the light (spoiler alert: the humans who didn’t immediately run tended to get eaten). And I can even see that these foundations can lead to a better world. People who understand sanctity and are religious donate more to charity and are more present in their communities. Military power emerging from strong coordination is sometimes the only way to stop wrongdoing. Countries with a sense of fairness and liberty have more equality, making them smarter and happier. The thing is, in all of these cases I can come up with a logical mechanism by which the foundation actually improves the world. And when I can’t come up with any ideas, even after thinking really hard about it through my non-individualistic, Haidt-informed, sociological lens, I tend to shrug and say that I guess we should make an exception, and that’s what I would do with the fish-eating widow.
This is also why I’m hesitant to agree that morality boils down to feelings. Even if people tend to make moral judgements emotionally, even if reasoning is hard and moral reasoning is suspect, even if these emotional judgements actually get us close to an approximately good society because they are the end product of group evolution, I still think that morality should be about doing the absolute best, and we can do better than approximately good, often by putting aside our feelings on something and honestly evaluating it using reason. We can take the country of India, find the hidden dials and switches on the bottom, and turn off the one that says “widow-fish-disgust,” and maybe everything else will keep working exactly the same, except that widows now have slightly more freedom. Maybe we can help the bee and the hive will be fine. Of course, real life countries don’t have dials and switches, but they have undergone this kind of transformation over and over. The tradition of slavery, while appealing to our sense of authority, was forgone in favor of liberty. The tradition of spurning people with birth defects instead of helping them, while appealing to our sense of sanctity, was forgone in favor of care. Long-held traditional norms are often revealed to be wrong or unimportant, or at least not as important as what we gain by letting them go, and a good indicator of whether or not we stand to gain by letting a specific rule go is whether its effect is reducible to harming people or society. When working emotionally it’s foolish to dismiss all feelings besides those of care and harm, but that’s only because other feelings are hints that something is going to cause actual harm.
This brings us to what I consider to be the most compelling case study in the book: the story of Meiwes and Brandes, which Haidt describes with intentional detail:
On the evening of March 9, the two men made a video to prove that Brandes fully consented to what was about to happen. Brandes then took some sleeping pills and alcohol, but he was still alert when Meiwes cut off Brandes’s penis, after being unable to bite it off (as Brandes had requested). Meiwes then sautéed the penis in a frying pan with wine and garlic. Brandes took a bite of it, then went off to a bathtub to bleed to death. A few hours later Brandes was not yet dead, so Meiwes kissed him, stabbed him in the throat, and then hung the body on a meat hook to strip off the flesh. Meiwes stored the flesh in his freezer and ate it gradually over the next ten months.
Sometimes violating sanctity, degrading ourselves, is wrong in a way that isn’t explainable in terms of harm. Meiwes and Brandes both consented, and they didn’t really hurt anyone else. So if this is wrong, but not morally wrong, what kind of wrong is it? One framing of morality is to ask: what society would you prefer to live in? Would I feel happy living in a society where autonomy is so important that it overrules my sanctity-based instincts against sexualized cannibalism? It’s an interesting and horrifying question to ask yourself. If I’m being honest, even given my strong propensity for putting more weight on the care/harm foundation, even given that I think autonomy is often good enough reason to do away with convention, even given my dislike of moral intuitionism, I still think I would pick the society without this stuff. (See, Haidt, I can be meta about intuitionism too.) My best response is that yes, sanctity does play a part in my own moral judgements, but if moral reasoning can be so biased that you can, and sometimes must, force yourself not to rely on it, then maybe moral emotions like disgust can be equally biased. I would prefer to live in the society without Meiwes and Brandes, but the absolute best society I can imagine would include a way that people like them could get what they want, just maybe not in my backyard. Still, Haidt is pointing out something important — it is unwise to ignore strong emotional reactions like this. Changes to social norms can have unexpected consequences, so when your progressive instinct kicks in, ready to override your faith in tradition, or your instinctual disgust in something, especially when that something is really, really different and new, just remember that human societies have survived on tradition and disgust for thousands of years. Liberalism tends to rely on feelings of care and harm and ignore the others.
As a result, it prioritizes individuals over societal dynamics, and is less emotionally appealing than philosophies which rely on all six foundations. As Haidt puts it: “liberalism — which has done so much to bring about freedom and equal opportunity … tends to overreach, change too many things too quickly, and reduce the stock of moral capital inadvertently.”
And this relates to another important point. If you’re someone without that progressive instinct, someone with a strong aversion to the unknown who sees the fragility of modern society, remember that Haidt’s thesis has two halves. Being conservative keeps our civilization stable, but a healthy society also needs to be pulled in the direction of progress. “While conservatives do a better job of preserving moral capital, they often fail to notice certain classes of victims, fail to limit the predations of certain powerful interests, and fail to see the need to change or update institutions as times change.” The intuitionist view laid out above gives conservatives a fully general response to liberal arguments. Whenever anyone tells you that some practice or norm is outdated and oppressive, you get to respond that upholding social norms is what holds society together and ignore the rest of their argument because you’re just a rider on an elephant, and so are they. But intuitionism is not a pass to ignore logic. Moral capital does not always outweigh other social gains.
Fine. I’m convinced. So what? That’s the subject of part 2.