Why Tamsin Shaw Imagines the Psychologists Are Taking Power

Tamsin Shaw’s essay in the February 25th issue of The New York Review of Books, provocatively titled “The Psychologists Take Power,” is no more scholarly than your average political attack ad, nor is it any more credible. (The article is available online, but I won’t lend it further visibility to search engines by linking to it here.) Two of the psychologists maligned in the essay, Jonathan Haidt and Steven Pinker, recently contributed a letter to the editors which effectively highlights Shaw’s faulty reasoning and myriad distortions, describing how she “prosecutes her case by citation-free attribution, spurious dichotomies, and standards of guilt by association that make Joseph McCarthy look like Sherlock Holmes” (82).

Upon first reading Shaw’s piece, I dismissed it as a particularly unscrupulous bit of interdepartmental tribalism—a philosopher bemoaning the encroachment by pesky upstart scientists into what was formerly the bailiwick of philosophers. But then a line in Shaw’s attempted rebuttal of Haidt and Pinker’s letter sent me back to the original essay, and this time around I recognized it as a manifestation of a more widespread trend among scholars, and a rather unscholarly one at that.

Shaw begins her article by accusing a handful of psychologists of exceeding the bounds of their official remit. These researchers have risen to prominence in recent years through their studies into human morality. But now, instead of restricting themselves, as responsible scientists would, to describing how we make moral judgements and attempting to explain why we respond to moral dilemmas the way we do, these psychologists have begun arrogating moral authority to themselves. They’ve begun, in other words, trying to tell us how we should reason morally—according to Shaw anyway. Her article then progresses through shady innuendo and arguments based on what Haidt and Pinker call “guilt through imaginability” to connect this group of authors to the CIA’s program of “enhanced interrogation,” i.e. torture, which culminated in such atrocities as those committed in the prisons at Abu Ghraib and Guantanamo Bay.

Shaw’s sole piece of evidence comes from a report that was commissioned by the American Psychological Association. David Hoffman and his fellow investigators did indeed find that two members of the APA played a critical role in developing the interrogation methods used by the CIA, and they had the sanction of top officials. Neither of the two, however, nor any of those officials, authored any of the books on moral psychology that Shaw is supposedly reviewing. In the report’s conclusion, the investigators describe the responses of clinical psychologists who “feel physically sick when they think about the involvement of psychologists intentionally using harsh interrogation techniques.” Shaw writes,

It is easy to imagine the psychologists who claim to be moral experts dismissing such a reaction as an unreliable “gut response” that must be overridden by more sophisticated reasoning. But a thorough distrust of rapid, emotional responses might well leave human beings without a moral compass sufficiently strong to guide them through times of crisis, when our judgement is most severely challenged, or to compete with powerful nonmoral motivations. (39)

What she’s referring to here is the two-system model of moral reasoning which posits a rapid, intuitive system, programmed in large part by our genetic inheritance but with some cultural variation in its expression, matched against a more effort-based, cerebral system that requires the application of complex reasoning.

But it must be noted that nowhere does any of the authors she’s reviewing make a case for a “thorough distrust of rapid, emotional responses.” Their positions are far more nuanced, and Haidt in fact argues in his book The Righteous Mind that liberals could benefit from paying more heed to some of their moral instincts—a case that Shaw herself summarizes in her essay when she’s trying to paint him as an overly “didactic” conservative.

            Haidt and Pinker’s response to Shaw’s argument by imaginability was to simply ask the other five authors she insinuates support torture whether they indeed reacted the way she describes. They write, “The results: seven out of seven said ‘no’” (82). These authors’ further responses to the question offer a good opportunity to expose just how off-base Shaw’s simplistic characterizations are.

None of these psychologists believes that a reaction of physical revulsion must be overridden or should be thoroughly distrusted. But several pointed out that in the past, people have felt physically sick upon contemplating homosexuality, interracial marriage, vaccination, and other morally unexceptionable acts, so gut feelings alone cannot constitute a “moral compass.” Nor is the case against “enhanced interrogation” so fragile, as Shaw implies, that it has to rest on gut feelings: the moral arguments against torture are overwhelming. So while primitive physical revulsion may serve as an early warning signal indicating that some practice calls for moral scrutiny, it is “the more sophisticated reasoning” that should guide us through times of crisis. (82; emphasis in original)

One phrase that should stand out here is “the moral arguments against torture are overwhelming.” Shaw is supposedly writing about a takeover by psychologists who advocate torture—but none of them actually advocates torture. And, having read four of the six books she covers, I can aver that this response was entirely predictable based on what the authors had written. So why does Shaw attempt to mislead her readers?

            The false implication that the authors she’s reviewing support torture isn’t the only central premise of Shaw’s essay that’s simply wrong; if these psychologists really are trying to take power, as she claims, that’s news to them. Haidt and Pinker begin their rebuttal by pointing out that “Shaw can cite no psychologist who claims special authority or ‘superior wisdom’ on moral matters” (82). Every one of them, with a single exception, in fact includes an explanation of what separates the two endeavors—describing human morality on the one hand, and prescribing values or behaviors on the other—in the very books Shaw professes to find so alarming. The lone exception, Yale psychologist Paul Bloom, author of Just Babies: The Origins of Good and Evil, wrote to Haidt and Pinker, “The fact that one cannot derive morality from psychological research is so screamingly obvious that I never thought to explicitly write it down” (82).

Yet Shaw insists all of these authors commit the fallacy of moving from is to ought; you have to wonder if she even read the books she’s supposed to be reviewing—beyond mining them for damning quotes anyway. And didn’t any of the editors at The New York Review think to check some of her basic claims? Or were they simply hoping to bank on the publication of what amounts to controversy porn? (Think of the dilemma faced by the authors: do you respond and draw more attention to the piece, or do you ignore it and let some portion of the readership come away with a wildly mistaken impression?)

            Haidt and Pinker do a fine job of calling out most of Shaw’s biggest mistakes and mischaracterizations. But I want to draw attention to two more instances of her falling short of any reasonable standard of scholarship, because each one reveals something important about the beliefs Shaw uses as her own moral compass. The authors under review situate their findings on human morality in a larger framework of theories about human evolution. Shaw characterizes this framework as “an unverifiable and unfalsifiable story about evolutionary psychology” (38). Shaw has evidently attended the Ken Ham school of evolutionary biology, which preaches that science can only concern itself with phenomena occurring right before our eyes in a lab. The reality is that, while testing adaptationist theories is a complicated endeavor, there are usually at least two ways to falsify them. You can show that the trait or behavior in question is absent in many cultures, or you can show that it emerges late in life after some sort of deliberate training. One of the books Shaw is supposedly reviewing, Bloom’s Just Babies, focuses specifically on research demonstrating that many of our common moral intuitions emerge when we’re babies, in our first year of life, with no deliberate training whatsoever.

            Bloom comes in for some more targeted, if off-hand, criticism near the conclusion of Shaw’s essay for an article he wrote to challenge the increasingly popular sentiment that we can solve our problems as a society by encouraging everyone to be more empathetic. Empathy, Bloom points out, is a finite resource; we’re simply not capable of feeling for every single one of the millions of individuals in need of care throughout the world. So we need to offer that care based on principle, not feeling. Shaw avoids any discussion of her own beliefs about morality in her essay, but from the nature of her mischaracterization of Bloom’s argument we can start to get a sense of the ideology informing her prejudices. She insists that

when Paul Bloom, in his own Atlantic article, “The Dark Side of Empathy,” warns us that empathy for people who are seen as victims may be associated with violent, punitive tendencies toward those in authority, we should be wary of extrapolating from his psychological claims a prescription for what should and should not be valued, or inferring that we need a moral corrective to a culture suffering from a supposed excess of empathic feelings. (40-1)

The “supposed excess of empathic feelings” isn’t the only laughable distortion people who actually read Bloom’s essay will catch; the actual examples he cites of when empathy for victims leads to “violent, punitive tendencies” include Donald Trump and Ann Coulter stoking outrage against undocumented immigrants by telling stories of the crimes a few of them commit. This misrepresentation raises an important question: why would Shaw want to mislead her readers into believing Bloom’s intention is to protect those in authority? This brings us to the McCarthyesque part of Shaw’s attack ad.

            The sections of the essay drawing a web of guilt connecting the two psychologists who helped develop torture methods for the CIA to all the authors she’d have us believe are complicit focus mainly on Martin Seligman, whose theory of learned helplessness formed the basis of the CIA’s approach to harsh interrogation. Seligman is the founder of a subfield called Positive Psychology, which he developed as a counterbalance to what he perceived as an almost exclusive focus on all that can go wrong with human thinking, feeling, and behaving. His Positive Psychology Center at the University of Pennsylvania has received $31 million in recent years from the Department of Defense—a smoking gun by Shaw’s lights. And Seligman even admits that on several occasions he met with those two psychologists who participated in the torture program. The other authors Shaw writes about have in turn worked with Seligman on a variety of projects. Haidt even wrote a book on Positive Psychology called The Happiness Hypothesis.

            In Shaw’s view, learned helplessness theory is a potentially dangerous tool being wielded by a bunch of mad scientists and government officials corrupted by financial incentives and a lust for military dominance. To her mind, the notion that Seligman could simply want to help soldiers cope with the stresses of combat is all but impossible to even entertain. In this and every other instance when Shaw attempts to mislead her readers, it’s to put the same sort of negative spin on the psychologists’ explicitly stated positions. If Bloom says empathy has a dark side, then all the authors in question are against empathy. If Haidt argues that resilience—the flipside of learned helplessness—is needed to counteract a culture of victimhood, then all of these authors are against efforts to combat sexism and racism on college campuses. And, as we’ve seen, if these authors say we should question our moral intuitions, it’s because they want to be able to get away with crimes like torture. “Expertise in teaching people to override their moral intuitions is only a moral good if it serves good ends,” Shaw herself writes. “Those ends,” she goes on, “should be determined by rigorous moral deliberation” (40). Since this is precisely what the authors she’s criticizing say in their books, we’re left wondering what her real problem with them might be.

            In her reply to Haidt and Pinker’s letter, Shaw suggests her aim for the essay was to encourage people to more closely scrutinize the “doctrines of Positive Psychology” and the central principles underlying psychological theories about human morality. I was curious to see how she’d respond to being called out for mistakenly stating that the psychologists were claiming moral authority and that they were given to using their research to defend the use of torture. Her main response is to repeat the central aspects of her rather flimsy case against Seligman. But then she does something truly remarkable; she doesn’t deny using guilt by imaginability—she defends it.

Pinker and Haidt say they prefer reality to imagination, but imagination is the capacity that allows us to take responsibility, insofar as it is ever possible, for the ends for which our work will be used and the consequences that it will have in the world. Such imagination is a moral and intellectual virtue that clearly needs to be cultivated. (85)

So, regardless of what the individual psychologists themselves explicitly say about torture, for instance, as long as they’re equipping other people with the conceptual tools to justify torture, they’re still at least somewhat complicit. This was the line that first made me realize Shaw’s essay was something other than a philosopher munching on sour grapes.

            Shaw’s approach to connecting each of the individual authors to Seligman and then through him to the torture program is about as sophisticated, and about as credible, as any narrative concocted by your average online conspiracy theorist. But she believes that these connections are important and meaningful, a belief, I suspect, that derives from her own philosophy. Advocates of this philosophy, commonly referred to as postmodernism or poststructuralism, posit that our culture is governed by a dominant ideology that serves to protect and perpetuate the societal status quo, especially with regard to what are referred to as hegemonic relationships—men over women, whites over other ethnicities, heterosexuals over homosexuals. This dominant ideology finds expression in, while at the same time propagating itself through, cultural practices ranging from linguistic expressions to the creation of art to the conducting of scientific experiments.

            Inspired by figures like Louis Althusser and Michel Foucault, postmodern scholars reject many of the central principles of humanism, including its emphasis on the role of rational discourse in driving societal progress. This is because the processes of reasoning and research that go into producing knowledge can never be fully disentangled from the exercise of power, or so it is argued. We experience the world through the medium of culture, and our culture distorts reality in a way that makes hierarchies seem both natural and inevitable. So, according to postmodernists, not only does science fail to create true knowledge of the natural world and its inhabitants, but the ideas it generates must also be scrutinized to identify their hidden political implications.

            What such postmodern textual analyses look like in practice is described in sociologist Ullica Segerstrale’s book, Defenders of the Truth: The Sociobiology Debate. Segerstrale observed that postmodern critics of evolutionary psychology (which was more commonly called sociobiology in the late 90s) were outraged by what they presumed were the political implications of the theories, not by what evolutionary psychologists actually wrote. She explains,

In their analysis of their targets’ texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum guilt might be attributed to the perpetrator of this claim. (206)  

This is similar to the type of imagination Shaw faults psychologists today for insufficiently exercising. For the postmodernists, the sum total of our cultural knowledge is what sustains all the varieties of oppression and injustice that exist in our society, so unless an author explicitly decries oppression or injustice he’ll likely be held under suspicion. Five of the six books Shaw subjects to her moral reading were written by white males. The sixth was written by a male and a female, both white. The people the CIA tortured were not white. So you might imagine white psychologists telling everyone not to listen to their conscience to make it easier for them to reap the benefits of a history of colonization. Of course, I could be completely wrong here; maybe this scenario isn’t what was playing out in Shaw’s imagination at all. But that’s the problem—there are few limits to what any of us can imagine, especially when it comes to people we disagree with on hot-button issues.

            Postmodernism began in English departments back in the ‘60s where it was originally developed as an approach to analyzing literature. From there, it spread to several other branches of the humanities and is now making inroads into the social sciences. Cultural anthropology was the first field to be mostly overtaken. You can see precursors to Shaw’s rhetorical approach in attacks leveled against sociobiologists like E.O. Wilson and Napoleon Chagnon by postmodern anthropologists like Marshall Sahlins. In a review published in 2001, also in The New York Review of Books, Sahlins writes,

The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.

Demonstrating his own power has been not only a necessary condition of Chagnon’s fieldwork, but a main technique of investigation.

The first thing to note is that Sahlins’s characterization of Chagnon’s books as narratives of “gaining control over people” is just plain silly; Chagnon was more often than not at the mercy of the Yanomamö. The second is that, just as anyone who’s actually read the books by Haidt, Pinker, Greene, and Bloom will be shocked by Shaw’s claim that their writing somehow bolsters the case for torture, anyone familiar with Chagnon’s studies of the Yanomamö will likely wonder what the hell they have to do with Vietnam, a war that to my knowledge he never expressed an opinion of in writing.

However, according to postmodern logic—or we might say postmodern morality—Chagnon’s observation that the Yanomamö were often violent, along with his espousal of a theory that holds such violence to have been common among preindustrial societies, leads inexorably to the conclusion that he wants us all to believe violence is part of our fixed nature as humans. Through the lens of postmodernism, Chagnon’s work is complicit in making people believe working for peace is futile because violence is inevitable. Chagnon may counter that he believes violence is likely to occur only in certain circumstances, and that by learning more about what conditions lead to conflict we can better equip ourselves to prevent it. But that doesn’t change the fact that society needs high-profile figures to bring before our modern academic version of the inquisition, so that all the other white men lording it over the rest of the world will see what happens to anyone who deviates from right (actually far-left) thinking.

Ideas really do have consequences of course, some of which will be unforeseen. The place where an idea ends up may even be repugnant to its originator. But the notion that we can settle foreign policy disputes, eradicate racism, end gender inequality, and bring about world peace simply by demonizing artists and scholars whose work goes against our favored party line, scholars and artists who maybe can’t be shown to support these evils and injustices directly but can certainly be imagined to be doing so in some abstract and indirect way—well, that strikes me as far-fetched. It also strikes me as dangerously misguided, since it’s not like scholars, or anyone else, ever needed any extra encouragement to imagine people who disagree with them being guilty of some grave moral offense. We’re naturally tempted to do that as it is.

Part of becoming a good scholar—part of becoming a grownup—is learning to live with people whose beliefs are different from yours, and to treat them fairly. Unless a particular scholar is openly and explicitly advocating torture, ascribing such an agenda to her is either irresponsible, if we’re unwittingly misrepresenting her, or dishonest, if we’re doing so knowingly. Arguments from imagined adverse consequences can go both ways. We could, for instance, easily write articles suggesting that Shaw is a Stalinist, or that she advocates prosecuting perpetrators of what members of the far left deem to be thought crimes. What about the consequences of encouraging suspicion of science in an age of widespread denial of climate change? Postmodern identity politics is at this moment posing a threat to free speech on college campuses. And the tactics of postmodern activists begin and end with the stoking of moral outrage, so we could easily make a case that the activists are deliberately trying to instigate witch hunts. With each baseless accusation and counter-accusation, though, we’re getting farther and farther away from any meaningful inquiry, forestalling any substantive debate, and hamstringing any real moral or political progress.

Many people try to square the circle, arguing that postmodernism isn’t inherently antithetical to science, and that the supposed insights derived from postmodern scholarship ought to be assimilated somehow into science. When Thomas Huxley, the physician and biologist known as Darwin’s bulldog, said that science “commits suicide when it adopts a creed,” he was pointing out that by adhering to an ideology you’re taking its tenets for granted. Science, despite many critics’ desperate proclamations to the contrary, is not itself an ideology; science is an epistemology, a set of principles and methods for investigating nature and arriving at truths about the world. Even the most well-established of these truths, however, is considered provisional, open to potential revision or outright rejection as the methods, technologies, and theories that form the foundation of this collective endeavor advance over the generations.

In her essay, Shaw cites the results of a project attempting to replicate the findings of several seminal experiments in social psychology, counting the surprisingly low success rate as further cause for skepticism of the field. What she fails to appreciate here is that the replication project is being done by a group of scientists who are psychologists themselves, because they’re committed to honing their techniques for studying the human mind. I would imagine if Shaw’s postmodernist precursors had shared a similar commitment to assessing the reliability of their research methods, such as they are, and weighing the validity of their core tenets, then the ideology would have long since fallen out of fashion by the time she was taking up a pen to write about how scary psychologists are.  

The point Shaw's missing here is that it’s precisely this constant quest to check and recheck the evidence, refine and further refine the methods, test and retest the theories, that makes science, if not a source of superior wisdom, then still the most reliable approach to answering questions about who we are, what our place is in the universe, and what habits and policies will give us, as individuals and as citizens, the best chance to thrive and flourish. As Saul Perlmutter, one of the discoverers of dark energy, has said, “Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.” Shaw may be right that no experimental result could ever fully settle a moral controversy, but experimental results are often not just relevant to our philosophical deliberations but critical to keeping those deliberations firmly grounded in reality.

Popular relevant posts:

JUST ANOTHER PIECE OF SLEAZE: THE REAL LESSON OF ROBERT BOROFSKY'S "FIERCE CONTROVERSY"

THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY

(My essay on Greene’s book)

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

(My essay on Pinker’s book)

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

(My essay on Haidt’s book)

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND


How Violent Fiction Works: Rohan Wilson’s “The Roving Party” and James Wood’s Sanguinary Sublime from Conrad to McCarthy

James Wood criticized Cormac McCarthy’s “No Country for Old Men” for being too trapped by its own genre tropes. Wood has a strikingly keen eye for literary registers, but he’s missing something crucial in his analysis of McCarthy’s work. Rohan Wilson’s “The Roving Party” works on some of the same principles as McCarthy’s work, and it shows that the authors’ visions extend far beyond the pages of any book.

            Any acclaimed novel of violence must be cause for alarm to anyone who believes stories encourage the behaviors depicted in them or contaminate the minds of readers with the attitudes of the characters. “I always read the book as an allegory, as a disguised philosophical argument,” writes David Shields in his widely celebrated manifesto Reality Hunger. Suspicious of any such disguised effort at persuasion, Shields bemoans the continuing popularity of traditional novels and agitates on behalf of a revolutionary new form of writing, a type of collage that is neither marked as fiction nor claimed as truth but functions rather as a happy hybrid—or, depending on your tastes, a careless mess—and is in any case completely lacking in narrative structure. This is because to him giving narrative structure to a piece of writing is itself a rhetorical move. “I always try to read form as content, style as meaning,” Shields writes. “The book is always, in some sense, stutteringly, about its own language” (197).

As arcane as Shields’s approach to reading may sound, his attempt to find some underlying message in every novel resonates with the preoccupations popular among academic literary critics. But what would it mean if novels really were primarily concerned with their own language, as so many students in college literature courses are taught? What if there really were some higher-order meaning we absorbed unconsciously through reading, even as we went about distracting ourselves with the details of description, character, and plot? Might a novel like Heart of Darkness, instead of being about Marlow’s growing awareness of Kurtz’s descent into inhuman barbarity, really be about something that at first seems merely contextual and incidental, like the darkness—the evil—of sub-Saharan Africa and its inhabitants? Might there be a subtle prompt to regard Kurtz’s transformation as some breed of enlightenment, a fatal lesson encapsulated and propagated by Conrad’s fussy and beautifully tantalizing prose, as if the author were wielding the English language like the fastenings of a yoke over the entire continent?

Novels like Cormac McCarthy’s Blood Meridian and, more recently, Rohan Wilson’s The Roving Party, take place amid a transition from tribal societies to industrial civilization similar to the one occurring in Conrad’s Congo. Is it in this seeming backdrop that we should seek the true meaning of these tales of violence? Both McCarthy’s and Wilson’s novels, it must be noted, represent conspicuous efforts at undermining the sanitized and Manichean myths that arose to justify the displacement and mass killing of indigenous peoples by Europeans as they spread over the far-flung regions of the globe. The white men hunting “Indians” for the bounties on their scalps in Blood Meridian are as beastly and bloodthirsty as the savages peopling the most lurid colonial propaganda, just as the Europeans making up Wilson’s roving party are only distinguishable by the relative degrees of their moral degradation, all of them, including the protagonist, moving in the shadow of their chief quarry, a native Tasmanian chief.

If these novels are about their own language, their form comprising their true content, all in the service of some allegory or argument, then what pleasure would anyone get from them, suggesting as they do that to partake of the fruit of civilization is to become complicit in the original sin of the massacre that made way for it? “There is no document of civilization,” Walter Benjamin wrote, “that is not at the same time a document of barbarism.” It could be that to read these novels is to undergo a sort of rite of expiation, similar to the ritual reenactment of the crucifixion performed by Christians in the lead up to Easter. Alternatively, the real argument hidden in these stories may be still more insidious; what if they’re making the case that violence is both eternal and unavoidable, that it is in our nature to relish it, so there’s no more point in resisting the urge personally than in trying to bring about reform politically?

            Shields intimates that the reason we enjoy stories is that they warrant our complacency when he writes, “To ‘tell a story well’ is to make what one writes resemble the schemes people are used to—in other words, their ready-made idea of reality” (200). Just as we take pleasure in arguments for what we already believe, Shields maintains (explicitly) that we delight in stories that depict familiar scenes and resolve in ways compatible with our convictions. And this equating of the pleasure we take in reading with the pleasure we take in having our beliefs reaffirmed is another practice nearly universal among literary critics. Sophisticated readers know better than to conflate the ideas professed by villainous characters like the judge in Blood Meridian with those of the author, but, as one prominent critic complains,

there is often the disquieting sense that McCarthy’s fiction puts certain fond American myths under pressure merely to replace them with one vaster myth—eternal violence, or [Harold] Bloom’s “universal tragedy of blood.” McCarthy’s fiction seems to say, repeatedly, that this is how it has been and how it always will be.

What’s interesting about this interpretation is that it doesn’t come from anyone normally associated with Shields’s school of thought on literature. Indeed, its author, James Wood, is something of a scourge to postmodern scholars of Shields’s ilk.

Wood takes McCarthy to task for his alleged narrative dissemination of the myth of eternal violence in a 2005 New Yorker piece, “Red Planet: The Sanguinary Sublime of Cormac McCarthy,” a review of his then-latest novel No Country for Old Men. Wood too, it turns out, hungers for reality in his novels, and he faults McCarthy’s book for replacing psychological profundity with the pabulum of standard plot devices. He insists that

the book gestures not toward any recognizable reality but merely toward the narrative codes already established by pulp thrillers and action films. The story is itself cinematically familiar. It is 1980, and a young man, Llewelyn Moss, is out antelope hunting in the Texas desert. He stumbles upon several bodies, three trucks, and a case full of money. He takes the money. We know that he is now a marked man; indeed, a killer named Anton Chigurh—it is he who opens the book by strangling the deputy—is on his trail.

Because McCarthy relies on the tropes of a familiar genre to convey his meaning, Wood suggests, that meaning can only apply to the hermetic universe imagined by that genre. In other words, any meaning conveyed in No Country for Old Men is rendered null in transit to the real world.

When Chigurh tells the blameless Carla Jean that “the shape of your path was visible from the beginning,” most readers, tutored in the rhetoric of pulp, will write it off as so much genre guff. But there is a way in which Chigurh is right: the thriller form knew all along that this was her end.

The acuity of Wood’s perception when it comes to the intricacies of literary language is often staggering, and his grasp of how diction and vocabulary provide clues to the narrator’s character and state of mind is equally prodigious. But, in this dismissal of Chigurh as a mere plot contrivance, as in his estimation of No Country for Old Men in general as a “morally empty book,” Wood is quite simply, quite startlingly, mistaken. And we might even say that the critical form knew all along that he would make this mistake.

           When Chigurh tells Carla Jean her path was visible, he’s not voicing any hardboiled fatalism, as Wood assumes; he’s pointing out that her predicament came about as a result of a decision her husband Llewelyn Moss made with full knowledge of the promised consequences. And we have to ask, could Wood really have known, before Chigurh showed up at the Moss residence, that Carla Jean would be made to pay for her husband’s defiance? It’s easy enough to point out superficial similarities to genre conventions in the novel (many of which it turns inside-out), but it doesn’t follow that anyone who notices them will be able to foretell how the book will end. Wood, despite his reservations, admits that No Country for Old Men is “very gripping.” But how could it be if the end were so predictable? And, if it were truly so morally empty, why would Wood care how it was going to end enough to be gripped? Indeed, it is in the realm of the characters’ moral natures that Wood is the most blinded by his reliance on critical convention. He argues,

Llewelyn Moss, the hunted, ought not to resemble Anton Chigurh, the hunter, but the flattening effect of the plot makes them essentially indistinguishable. The reader, of course, sides with the hunted. But both have been made unfree by the fake determinism of the thriller.

How could the two men’s fates be determined by the genre if in a good many thrillers the good guy, the hunted, prevails?

One glaring omission in Wood’s analysis is that Moss initially escapes undetected with the drug money he discovers at the scene of the shootout he happens upon while hunting, but he is then tormented by his conscience until he decides to return to the trucks with a jug of water for a dying man who begged him for a drink. “I’m fixin to go do somethin dumbern hell but I’m goin anyways,” he says to Carla Jean when she asks what he’s doing. “If I don’t come back tell Mother I love her” (24). Llewelyn, throughout the ensuing chase, is thus being punished for doing the right thing, an injustice that unsettles readers to the point where we can’t look away—we’re gripped—until we’re assured that he ultimately defeats the agents of that injustice. While Moss risks his life to give a man a drink, Chigurh, as Wood points out, is first seen killing a cop. Moreover, it’s hard to imagine Moss showing up to murder an innocent woman to make good on an ultimatum he’d presented to a man who had already been killed in the interim—as Chigurh does in the scene when he explains to Carla Jean that she’s to be killed because Llewelyn made the wrong choice.

Chigurh is in fact strangely principled, in a morally inverted sort of way, but the claim that he’s indistinguishable from Moss bespeaks a failure of attention completely at odds with the uncannily keen-eyed reading we’ve come to expect from Wood. When he revisits McCarthy’s writing in a review of the 2006 post-apocalyptic novel The Road, collected in the book The Fun Stuff, Wood is once again impressed by McCarthy’s “remarkable effects” but thoroughly baffled by “the matter of his meanings” (61). The novel takes us on a journey south to the sea with a father and his son as they scrounge desperately for food in abandoned houses along the way. Wood credits McCarthy for not substituting allegory for the answer to “a simpler question, more taxing to the imagination and far closer to the primary business of fiction making: what would this world without people look like, feel like?” But then he unaccountably struggles to sift out the novel’s hidden philosophical message. He writes,

A post-apocalyptic vision cannot but provoke dilemmas of theodicy, of the justice of fate; and a lament for the Deus absconditus is both implicit in McCarthy’s imagery—the fine simile of the sun that circles the earth “like a grieving mother with a lamp”—and explicit in his dialogue. Early in the book, the father looks at his son and thinks: “If he is not the word of God God never spoke.” There are thieves and murderers and even cannibals on the loose, and the father and son encounter these fearsome envoys of evil every so often. The son needs to think of himself as “one of the good guys,” and his father assures him that this is the side they are indeed on. (62)

We’re left wondering, is there any way to answer the question of what a post-apocalypse would be like in a story that features starving people reduced to cannibalism without providing fodder for genre-leery critics on the lookout for characters they can reduce to mere “envoys of evil”?

As trenchant as Wood is regarding literary narration, and as erudite—or pedantic, depending on your tastes—as he is regarding theology, the author of the excellent book How Fiction Works can’t help but fall afoul of his own, and his discipline’s, thoroughgoing ignorance when it comes to how plots work, what keeps the moral heart of a story beating. The way Wood fails to account for the forest comprised of the trees he takes such thorough inventory of calls to mind a line of his own from a chapter in The Fun Stuff about Edmund Wilson, describing an uncharacteristic failure on part of this other preeminent critic:

Yet the lack of attention to detail, in a writer whose greatness rests supremely on his use of detail, the unwillingness to talk of fiction as if narrative were a special kind of aesthetic experience and not a reducible proposition… is rather scandalous. (72)

To his credit, though, Wood never writes about novels as if they were completely reducible to their propositions; he doesn’t share David Shields’s conviction that stories are nothing but allegories or disguised philosophical arguments. Indeed, few critics are as eloquent as Wood on the capacity of good narration to communicate the texture of experience in a way all literate people can recognize from their own lived existences.

            But Wood isn’t interested in plots. He just doesn’t seem to like them. (There’s no mention of plot in either the table of contents or the index to How Fiction Works.) Worse, he shares Shields’s and other postmodern critics’ impulse to decode plots and their resolutions—though he also searches for ways to reconcile whatever moral he manages to pry from the story with its other elements. This is in fact one of the habits that tends to derail his reviews. Even after lauding The Road’s eschewal of easy allegory in place of the hard work of ground-up realism, Wood can’t help trying to decipher the end of the novel in the context of the religious struggle he sees taking place in it. He writes of the son’s survival,

The boy is indeed a kind of last God, who is “carrying the fire” of belief (the father and son used to speak of themselves, in a kind of familial shorthand, as people who were carrying the fire: it seems to be a version of being “the good guys”.) Since the breath of God passes from man to man, and God cannot die, this boy represents what will survive of humanity, and also points to how life will be rebuilt. (64)

This interpretation underlies Wood’s contemptuous attitude toward other reviewers who found the story uplifting, including Oprah, who used The Road as one of her book club selections. To Wood, the message rings false. He complains that

a paragraph of religious consolation at the end of such a novel is striking, and it throws the book off balance a little, precisely because theology has not seemed exactly central to the book’s inquiry. One has a persistent, uneasy sense that theodicy and the absent God have been merely exploited by the book, engaged with only lightly, without much pressure of interrogation. (64)

Inquiry? Interrogation? Whatever happened to “special kind of aesthetic experience”? Wood first places seemingly inconsequential aspects of the novel at the center of his efforts to read meaning into it, and then he faults the novel for not exploring these aspects at greater length. The more likely conclusion we might draw here is that Wood is simply and woefully mistaken in his interpretation of the book’s meaning. Indeed, Wood’s jump to theology, despite his insistence on its inescapability, is really quite arbitrary, one of countless themes a reader might possibly point to as indicative of the novel’s one true meaning.

Perhaps the problem here is the assumption that a story must have a meaning, some point that can be summed up in a single statement, for it to grip us. Getting beyond the issue of what statement the story is trying to make, we can ask what it is about the aesthetic experience of reading a novel that we find so compelling. For Wood, it’s clear the enjoyment comes from a sort of communion with the narrator, a felt connection forged by language, which effects an estrangement from his own mundane experiences by passing them through the lens of the character’s idiosyncratic vocabulary, phrasings, and metaphors. The sun dimly burning through an overcast sky looks much different after you’ve heard it compared to “a grieving mother with a lamp.” This pleasure in authorial communion and narrative immersion is commonly felt by the more sophisticated of literary readers. But what about less sophisticated readers? Many people who have a hard enough time simply understanding complex sentences, never mind discovering in them clues to the speaker’s personality, nevertheless become absorbed in narratives.

Developmental psychologists Karen Wynn and Paul Bloom, along with then graduate student Kiley Hamlin, serendipitously discovered a major clue to the mystery of why fictional stories engage humans’ intellectual and emotional faculties so powerfully while trying to determine at what age children begin to develop a moral sense. In a series of experiments conducted at the Yale Infant Cognition Center, Wynn and her team found that babies under a year old, even as young as three months, are easily induced to attribute agency to inanimate objects with nothing but a pair of crude eyes to suggest personhood. And, astonishingly, once agency is presumed, these infants begin attending to the behavior of the agents for evidence of their propensities toward being either helpfully social or selfishly aggressive—even when they themselves aren’t the ones to whom the behaviors are directed. 

            In one of the team’s most dramatic demonstrations, the infants watch puppet shows featuring what Bloom, in his book about the research program Just Babies, refers to as “morality plays” (30). Two rabbits respond to a tiger’s overture of rolling a ball toward them in different ways, one by rolling it back playfully, the other by snatching it up and running away with it. When the babies are offered a choice between the two rabbits after the play, they nearly always reach for the “good guy.” However, other versions of the experiment show that babies do favor aggressive rabbits over nice ones—provided that the victim is itself guilty of some previously witnessed act of selfishness or aggression. So the infants prefer cooperation over selfishness and punishment over complacency.

            Wynn and Hamlin didn’t intend to explore the nature of our fascination with fiction, but even the most casual assessment of our most popular stories suggests their appeal to audiences depends on a distinction similar to the one made by the infants in these studies. Indeed, the most basic formula for storytelling could be stated: good guy struggles against bad guy. Our interest is automatically piqued once such a struggle is convincingly presented, and it doesn’t depend on any proposition that can be gleaned from the outcome.

We favor the good guy because his (or her) altruism triggers an emotional response—we like him. And our interest in the ongoing developments of the story—the plot—is both emotional and dynamic. This is what the aesthetic experience of narrative consists of.

            The beauty in stories comes from the elevation we get from the experience of witnessing altruism, and the higher the cost to the altruist the more elevating the story. The symmetry of plots is the balance of justice. Stories meant to disturb readers disrupt that balance. The crudest stories pit good guys against bad guys. The more sophisticated stories feature what we hope are good characters struggling against temptations or circumstances that make being good difficult, or downright dangerous. In other words, at the heart of any story is a moral dilemma, a situation in which characters must decide who deserves what fate and what they’re willing to pay to ensure they get it. The specific details of that dilemma are what we recognize as the plot.

The most basic moral, lesson, proposition, or philosophical argument inherent in the experience of attending to a story derives then not from some arbitrary decision on the part of the storyteller but from an injunction encoded in our genome. At some point in human evolution, our ancestors’ survival began to depend on mutual cooperation among all the members of the tribe, and so to this day, and from a startlingly young age, we’re on the lookout for anyone who might be given to exploiting the cooperative norms of our group. Literary critics could charge that the appeal of the altruist is merely another theme we might at this particular moment in history want to elevate to the status of most fundamental aspect of narrative. But I would challenge anyone who believes some other theme, message, or dynamic is more crucial to our engagement with stories to subject their theory to the kind of tests the interplay of selfish and altruistic impulses routinely passes in the Yale studies. Do babies care about theodicy? Are Wynn et al.’s morality plays about their own language?

This isn’t to say that other themes or allegories never play a role in our appreciation of novels. But whatever role they do play is in every case ancillary to the emotional involvement we have with the moral dilemmas of the plot. 1984 and Animal Farm are clear examples of allegories—but their greatness as stories is attributable to the appeal of their characters and the convincing difficulty of their dilemmas. Without a good plot, no one would stick around for the lesson. If we didn’t first believe Winston Smith deserved to escape Room 101 and that Boxer deserved a better fate than the knackery, we’d never subsequently be moved to contemplate the evils of totalitarianism. What makes these such powerful allegories is that, if you subtracted the political message, they’d still be great stories because they engage our moral emotions.

            What makes violence so compelling in fiction then is probably not that it sublimates our own violent urges, or that it justifies any civilization’s past crimes; violence simply ups the stakes for the moral dilemmas faced by the characters. The moment by moment drama in The Road, for instance, has nothing to do with whether anyone continues to believe in God. The drama comes from the father and son’s struggles to resist having to succumb to theft and cannibalism to survive. That’s the most obvious theme recurring throughout the novel. And you get the sense that were it not for the boy’s constant pleas for reassurance that they would never kill and eat anyone—the ultimate act of selfish aggression—and that they would never resort to bullying and stealing, the father quite likely would have made use of such expedients. The fire that they’re carrying is not the light of God; it’s the spark of humanity, the refusal to forfeit their human decency. (Wood doesn't catch that the fire was handed off from Sheriff Bell's father at the end of No Country.) The boy may very well be a redeemer, in that he helps his father make it to the end of his life with a clear conscience, but unless you believe morality is exclusively the bailiwick of religion God’s role in the story is marginal at best.

            What the critics given to dismissing plots as pointless fabrications fail to consider is that just as idiosyncratic language and simile estrange readers from their mundane existence, so too the high-stakes dilemmas that make up plots can make us see our own choices in a different light, effecting their own breed of estrangement with regard to our moral perceptions and habits. In The Roving Party, set in the early nineteenth century, Black Bill, a native Tasmanian raised by a white family, joins a group of men led by a farmer named John Batman to hunt and kill other native Tasmanians and secure the territory for the colonials. The dilemmas Bill faces are like nothing most readers will ever encounter. But their difficulty is nonetheless universally understandable. In the following scene, Bill, who is also called the Vandemonian, along with a young boy and two native scouts, watches as Batman steps up to a wounded clansman in the aftermath of a raid on his people.

Batman considered the silent man secreted there in the hollow and thumbed back the hammers. He put one foot either side of the clansman’s outstretched legs and showed him the long void of those bores, standing thus prepared through a few creakings of the trees. The warrior was wide-eyed, looking to Bill and to the Dharugs.

The eruption raised the birds squealing from the branches. As the gunsmoke cleared the fellow slumped forward and spilled upon the soil a stream of arterial blood. The hollow behind was peppered with pieces of skull and other matter. John Batman snapped open the locks, cleaned out the pans with his cloth and mopped the blood off the barrels. He looked around at the rovers.

The boy was openmouthed, pale, and he stared at the ruination laid out there at his feet and stepped back as the blood ran near his rags. The Dharugs had by now turned away and did not look back. They began retracing their track through the rainforest, picking among the fallen trunks. But Black Bill alone among that party met Batman’s eye. He resettled his fowling piece across his back and spat on the ferns, watching Batman. Batman pulled out his rum, popped loose the cork, and drank. He held out the vessel to Bill. The Vandemonian looked at him. Then he turned to follow the Parramatta men out among the lemon myrtles and antique pines. (92)

If Rohan Wilson had wanted to expound on the evils of colonialism in Tasmania, he might have written about how Batman, a real figure from history, murdered several men he could easily have taken prisoner. But Wilson wanted to tell a story, and he knew that dilemmas like this one would grip our emotions. He likewise knew he didn’t have to explain that Bill, however much he disapproves of the murder, can’t afford to challenge his white benefactor in any less subtle manner than meeting his eyes and refusing his rum.

            Unfortunately, Batman registers the subtle rebuke all too readily. Instead of himself killing a native lawman wounded in a later raid, Batman leaves the task to Bill, who this time isn’t allowed the option of silently disapproving. But the way Wilson describes Bill’s actions leaves no doubt in the reader’s mind about his feelings, and those feelings have important consequences for how we feel about the character.

Black Bill removed his hat. He worked back the heavy cocks of both barrels and they settled with a dull clunk. Taralta clutched at his swaddled chest and looked Bill in the eyes, as wordless as ground stone. Bill brought up the massive gun and steadied the barrels across his forearm as his broken fingers could not take the weight. The sight of those octagonal bores levelled on him caused the lawman to huddle down behind his hands and cry out, and Bill steadied the gun but there was no clear shot he might take. He waited.

                        See now, he said. Move your hands.

            The lawman crabbed away over the dirt, still with his arms upraised, and Bill followed him and kicked him in the bandaged ribs and kicked at his arms.

                        menenger, Bill said, menenger.

The lawman curled up more tightly. Bill brought the heel of his boot to bear on the wounded man but he kicked in vain while Taralta folded his arms ever tighter around his head.

Black Bill lowered the gun. Wattlebirds made their yac-a-yac coughs in the bush behind and he gazed at the blue hills to the south and the snow clouds forming above them. When Bill looked again at the lawman he was watching through his hands, dirt and ash stuck in the cords of his ochred hair. Bill brought the gun up, balanced it across his arm again and tucked the butt into his shoulder. Then he fired into the lawman’s head.

The almighty concussion rattled the wind in his chest and the gun bucked from his grip and fell. He turned away, holding his shoulder. Blood had spattered his face, his arms, the front of his shirt. For a time he would not look at the body of the lawman where it lay near the fire. He rubbed at the bruising on his shoulder; watched storms amass around the southern peaks. After a while he turned to survey the slaughter he had wrought.

One of the lawman’s arms was gone at the elbow and the teeth seated in the jawbone could be seen through the cheek. There was flesh blown every place. He picked up the Manton gun. The locks were soiled and he fingered out the grime, and then with the corner of his coat cleaned the pan and blew into the latchworks. He brought the weapon up to eye level and peered along its sights for barrel warps or any misalignment then, content, slung the leather on his shoulder. Without a rearward glance he stalked off, his hat replaced, his boots slipping in the blood. Smoke from the fire blew around him in a snarl raised on the wind and dispersed again on the same. (102-4)

Depending on their particular ideological bent, critics may charge that a scene like this simply promotes the type of violence it depicts, or that it encourages a negative view of native Tasmanians—or indigenous peoples generally—as of such weak moral fiber that they can be made to turn against their own countrymen. And pointing out that the aspect of the scene that captures our attention is the process, the experience, of witnessing Bill’s struggle to resolve his dilemma would do little to ease their worries; after all, even if the message is ancillary, its influence could still be pernicious.

            The reason critics who apply their favored political theories to their analyses of fiction so often stray into the realm of the absurd is that the only readers who experience stories the way they do are the ones who share their ideological preoccupations. You can turn any novel into a Rorschach, pulling out disparate shapes and elements to blur into some devious message. But any reader approaching the writing without your political theories or your critical approach will likely come away with a much more basic and obvious lesson. Black Bill’s dilemma is that he has to kill many of his fellow Tasmanians if he wants to continue living as part of a community of whites. If readers take on his attitude toward killing as it’s demonstrated in the scene when he kills Taralta, they’ll be more reluctant to do it, not less. Bill clearly loathes what he’s forced to do. And if any race comes out looking bad it’s the whites, since they’re the ones whose culture forces Bill to choose between his family’s well-being and the dictates of his conscience.

Readers likely have little awareness of being influenced by the overarching themes in their favorite stories, but upon reflection the meaning of those themes is usually pretty obvious. Recent research into how reading the Harry Potter books has impacted young people’s political views, for instance, shows that fans of the series are more accepting of out-groups, more tolerant, less predisposed to authoritarianism, more supportive of equality, and more opposed to violence and torture. Anthony Gierzynski, the author of the study, points out, “As Harry Potter fans will have noted, these are major themes repeated throughout the series.” The messages that reach readers are the conspicuous ones, not the supposedly hidden ones critics pride themselves on being able to suss out.

            It’s an interesting question just how wicked stories could persuade us to be, relying as they do on our instinctual moral sense. Fans could perhaps be biased toward evil by themes about the threat posed by some out-group, the debased nature of the lower orders, or the wickedness of nonbelievers in the accepted deities—since the salience of these concepts likewise seems to be inborn. But stories told from the perspective of someone belonging to the persecuted group could provide an antidote. At any rate, there’s a solid case to be made that novels have helped the moral arc of history bend toward greater justice and compassion.

            Even a novel with violence as pervasive and chaotic as it is in Blood Meridian sets up a moral gradient for the characters to occupy—though finding where the judge fits is a quite complicated endeavor—and the one with the most qualms about killing happens to be the protagonist, referred to simply as the kid. “You alone were mutinous,” the judge says to him. “You alone reserved in your soul some corner of clemency for the heathen” (299). The kid’s character is revealed much the way Black Bill’s is in The Roving Party, as readers witness him working through high-stakes dilemmas. After drawing arrows to determine who in the band of scalp hunters will stay behind to kill some of their wounded (to prevent a worse fate at the hands of the men pursuing them), the kid finds himself tasked with euthanizing a man who would otherwise survive.

                        You wont thank me if I let you off, he said.

                        Do it then you son of a bitch.

            The kid sat. A light wind was blowing out of the north and some doves had begun to call in the thicket of greasewood behind them.

                        If you want me just to leave you I will.

                        Shelby didnt answer

                        He pushed a furrow in the sand with the heel of his boot. You’ll have to say.

                        Will you leave me a gun?

                        You know I can’t leave you no gun.

                        You’re no better than him. Are you?

                        The kid didn’t answer. (208)

That “him” is ambiguous; it could either be Glanton, the leader of the gang whose orders the kid is ignoring, or the judge, who engages him throughout the later parts of the novel in a debate about the necessity of violence in history. We know by now that the kid really is better than the judge—at least in the sense that Shelby means. And the kid handles the dilemma, as best he can, by hiding Shelby in some bushes and leaving him with a canteen of water.

These three passages from The Roving Party and Blood Meridian reveal as well something about the language commonly used by authors of violent novels going back to Conrad (perhaps as far back as Tolstoy). Faced with the choice of killing a man—or of standing idly by and allowing him to be killed—the characters hesitate, and the space of their hesitation is filled with details like the type of birdsong that can be heard. This style of “dirty realism,” a turning away from abstraction, away even from thought, to focus intensely on physical objects and the natural world, frustrates critics like James Wood because they prefer their prose to register the characters’ meanderings of mind in the way that only written language can. Writing about No Country for Old Men, Wood complains about all the labeling and descriptions of weapons and vehicles to the exclusion of thought and emotion.

Here is Hemingway’s influence, so popular in male American fiction, of both the pulpy and the highbrow kind. It recalls the language of “A Farewell to Arms”: “He looked very dead. It was raining. I had liked him as well as anyone I ever knew.” What appears to be thought is in fact suppressed thought, the mere ratification of male taciturnity. The attempt to stifle sentimentality—“He looked very dead”—itself comes to seem a sentimental mannerism. McCarthy has never been much interested in consciousness and once declared that as far as he was concerned Henry James wasn’t literature. Alas, his new book, with its gleaming equipment of death, its mindless men and absent (but appropriately sentimentalized) women, its rigid, impacted prose, and its meaningless story, is perhaps the logical result of a literary hostility to Mind.

Here again Wood is relaxing his otherwise razor-keen capacity for gleaning insights from language and relying instead on the anemic conventions of literary criticism—a discipline obsessed with the enactment of gender roles. (I’m sure Suzanne Collins would be amused by this idea of masculine taciturnity.) But Wood is right to recognize the natural tension between a literature of action and a literature of mind. Imagine how much the impact of Black Bill’s struggle with the necessity of killing Taralta would be blunted if we were privy to his thoughts, all of which are implicit in the scene as Wilson has rendered it anyway.

            Fascinatingly, though, it seems that Wood eventually realized the actual purpose of this kind of evasive prose—and it was Cormac McCarthy he learned it from. As much as Wood lusts after some leap into theological lucubration as characters reflect on the lessons of the post-apocalypse or the meanings of violence, the psychological reality is that it is often in the midst of violence or when confronted with imminent death that people are least given to introspection. As Wood explains in writing about the prose style of The Road,

McCarthy writes at one point that the father could not tell the son about life before the apocalypse: “He could not construct for the child’s pleasure the world he’d lost without constructing the loss as well and thought perhaps the child had known this better than he.” It is the same for the book’s prose style: just as the father cannot construct a story for the boy without also constructing the loss, so the novelist cannot construct the loss without the ghost of the departed fullness, the world as it once was. (55)

The rituals of weapon reloading, car repair, and wound wrapping that Wood finds so offputtingly affected in No Country for Old Men are precisely the kind of practicalities people would try to engage their minds with in the aftermath of violence to avoid facing the reality. But this linguistic and attentional coping strategy is not without moral implications of its own.

            In the opening of The Roving Party, Black Bill receives a visit from some of the very clansmen he’s been asked by John Batman to hunt. The headman of the group is a formidable warrior named Manalargena (another real historical figure), who is said to have magical powers. He has come to recruit Bill to help in fighting against the whites, unaware of Bill’s already settled loyalties. When Bill refuses to come fight with Manalargena, the headman’s response is to tell a story about two brothers who live near a river where they catch plenty of crayfish, make fires, and sing songs. Then someone new arrives on the scene:

Hunter come to the river. He is hungry hunter you see. He want crayfish. He see them brother eating crayfish, singing song. He want crayfish too. He bring up spear. Here the headman made as if to raise something. He bring up that spear and he call out: I hungry, you give me that crayfish. He hold that spear and he call out. But them brother they scared you see. They scared and they run. They run and they change. They change to wallaby and they jump. Now they jump and jump and the hunter he follow them.

So hunter he change too. He run and he change to that wallaby and he jump. Now three wallaby jump near river. They eat grass. They forget crayfish. They eat grass and they drink water and they forget crayfish. Three wallaby near the river. Very big river. (7-8)

Bill initially dismisses the story, saying it makes no sense. Indeed, as a story, it’s terrible. The characters have no substance and the transformation seems morally irrelevant. The story is pure allegory. Interestingly, though, by the end of the novel, its meaning is perfectly clear to Bill. Taking on the roles of hunter and hunted leaves no room for songs, no place for what began the hunt in the first place, creating a life closer to that of animals than of humans. There are no more fires.

            Wood counts three registers authors like Conrad and McCarthy—and we can add Wilson—use in their writing. The first is the dirty realism that conveys the characters’ unwillingness to reflect on their circumstances or on the state of their souls. The third is the lofty but oblique discourse on God’s presence or absence in a world of tragedy and carnage Wood finds so ineffectual. For most readers, though, it’s the second register that stands out. Here’s how Wood describes it:

Hard detail and a fine eye is combined with exquisite, gnarled, slightly antique (and even slightly clumsy or heavy) lyricism. It ought not to work, and sometimes it does not. But many of its effects are beautiful—and not only beautiful, but powerfully efficient as poetry. (59)

This description captures what’s both great and frustrating about the best and worst lines in these authors’ novels. But Wood takes the tradition for granted without asking why this haltingly graceful and heavy-handedly subtle language is so well-suited to these violent stories. The writers are compelled to use this kind of language by the very effects of the plot and setting that critics today so often fail to appreciate—though Wood does gesture toward it in the title of his essay on No Country for Old Men. The dream logic of song and simile that goes into the aesthetic experience of bearing witness to the characters sparsely peopling the starkly barren and darkly ominous landscapes of these novels carries within it the breath of the sublime.

            In coming to care about characters whose fates unfold in the aftermath of civilization, or in regions where civilization has yet to take hold, places where bloody aggression and violent death are daily concerns and witnessed realities, we’re forced to adjust emotionally to the worlds they inhabit. Experiencing a single death brings a sense of tragedy, but coming to grips with a thousand deaths has a more curious effect. And it is this effect that the strange tangles of metaphorical prose both gesture toward and help to induce. The sheer immensity of the loss, the casual brushing away of so many bodies and the blotting out of so much unique consciousness, overstresses the capacity of any individual to comprehend it. The result is paradoxical, a fixation on the material objects still remaining, and a sliding off of one’s mind onto a plane of mental existence where the material has scarcely any reality at all because it has scarcely any significance at all. The move toward the sublime is a lifting up toward infinite abstraction, the most distant perspective ever possible on the universe, where every image is a symbol for some essence, where every embrace is a symbol for human connectedness, where every individual human is a symbol for humanity. This isn’t the abstraction of logic, the working out of implications about God or cosmic origins. It’s the abstraction of the dream or the religious experience, an encounter with the sacred and the eternal, a falling and fading away of the world of the material and the particular and the mundane.

            The prevailing assumption among critics and readers alike is that fiction, especially literary fiction, attempts to represent some facet of life, so the nature of a given representation can be interpreted as a comment on whatever is being represented. But what if the representations, the correspondences between the fictional world and the nonfictional one, merely serve to make the story more convincing, more worthy of our precious attention? What if fiction isn’t meant to represent reality so much as to alter our perceptions of it? Critics can fault plots like the one in No Country for Old Men, and characters like Anton Chigurh, for having no counterparts outside the world of the story, mooting any comment about the real world the book may be trying to make. But what if the purpose of drawing readers into fictional worlds is to help them see their own worlds anew by giving them a taste of what it would be like to live a much different existence? Even the novels that hew more closely to the mundane, the unremarkable passage of time, are condensed versions of the characters’ lives, encouraging readers to take a broader perspective on their own. The criteria we should apply to our assessments of novels then would not be how well they represent reality and how accurate or laudable their commentaries are. We should instead judge novels by how effectively they pull us into the worlds they create for themselves and how differently we look at our own world in the wake of the experience. And since high-stakes moral dilemmas are the heart of stories we might wonder what effect the experience of witnessing them will have on our own lower-stakes lives.

Also read:

HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity

Life's White Machine: James Wood and What Doesn't Happen in Fiction

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

Dennis Junk

Science’s Difference Problem: Nicholas Wade’s Troublesome Inheritance and the Missing Moral Framework for Discussing the Biology of Behavior

Nicholas Wade went there. In his book “A Troublesome Inheritance,” he argues that race is not only a real, biological phenomenon but also one with potentially important implications for our understanding of the fates of different peoples. Is it possible to even discuss such things without being justifiably labeled a racist? More importantly, if biological differences do show up in the research, how can we discuss them without being grossly immoral?

            No sooner had Nicholas Wade’s new book become available for free two-day shipping than a contest began to see who could pen the most devastating critical review of it, the one that best satisfies our desperate urge to dismiss Wade’s arguments and reinforce our faith in the futility of studying biological differences between human races, a faith backed up by a cherished official consensus ever so conveniently in line with our moral convictions. That Charles Murray, one of the authors of the evil tome The Bell Curve, wrote an early highly favorable review for the Wall Street Journal only upped the stakes for all would-be champions of liberal science. Even as the victor awaits crowning, many scholars are posting links to their favorite contender’s critiques all over social media to advertise their principled rejection of this book they either haven’t read yet or have no intention of ever reading.

You don’t have to go beyond the title, A Troublesome Inheritance: Genes, Race and Human History, to understand what all these conscientious intellectuals are so eager to distance themselves from—and so eager to condemn. History has undeniably treated some races much more poorly than others, so if their fates are in any small way influenced by genes the implication of inferiority is unavoidable. Regardless of what he actually says in the book, Wade’s very program strikes many as racist from its inception.

            The going theories for the dawn of the European Enlightenment and the rise of Western culture—and Western people—to global ascendancy attribute the phenomenon to a combination of geographic advantages and historical happenstance. Wade, along with many other scholars, finds such explanations unsatisfying. Geography can explain why some societies never reached sufficient population densities to make the transition into states. “Much harder to understand,” Wade writes, “is how Europe and Asia, lying on much the same lines of latitude, were driven in the different directions that led to the West’s dominance” (223). Wade’s theory incorporates elements of geography—like the relatively uniform expanse of undivided territory between the Yangtze and Yellow rivers that facilitated the establishment of autocratic rule, and the diversity of fragmented regions in Europe preventing such consolidation—but he goes on to suggest that these different environments would have led to the development of different types of institutions. Individuals more disposed toward behaviors favored by these institutions, Wade speculates, would be rewarded with greater wealth, which would in turn allow them to have more children with behavioral dispositions similar to their own.

            After hundreds of years and multiple generations, Wade argues, the populations of diverse regions would respond to these diverse institutions by evolving subtly different temperaments. In China, for instance, favorable—and hence selected-for—traits may have included intelligence, conformity, and obedience. These behavioral propensities would subsequently play a role in determining the future direction of the institutions that fostered their evolution. Average differences in personality would, according to Wade, also make it more or less likely that certain new types of institution would arise within a given society, or that they could be successfully transplanted into it. And it’s a society’s institutions that ultimately determine its fate relative to other societies. To the objection that geography can, at least in principle, explain the vastly different historical outcomes among peoples of specific regions, Wade responds, “Geographic determinism, however, is as absurd a position as genetic determinism, given that evolution is about the interaction between the two” (222).

            East Asians score higher on average on IQ tests than people with European ancestry, but there’s no evidence that any advantage they enjoy in intelligence, or any proclivity they may display toward obedience and conformity—traits supposedly manifest in their long history of autocratic governance—is attributable to genetic differences as opposed to traditional attitudes toward schoolwork, authority, and group membership inculcated through common socialization practices. So we can rest assured that Wade’s just-so story about evolved differences between the races in social behavior is eminently dismissible. Wade himself at several points throughout A Troublesome Inheritance admits that his case is wholly speculative. So why, given the abominable history of racist abuses of evolutionary science, would Wade publish such a book?

It’s not because he’s unaware of the past abuses. Indeed, in his second chapter, titled “Perversions of Science,” which none of the critical reviewers deigns to mention, Wade chronicles the rise of eugenics and its culmination in the Holocaust. He concludes,

After the Second World War, scientists resolved for the best of reasons that genetics research would never again be allowed to fuel the racial fantasies of murderous despots. Now that new information about human races has been developed, the lessons of the past should not be forgotten and indeed are all the more relevant. (38)

The convention among Wade’s critics is to divide his book into two parts, acknowledge that the first is accurate and compelling enough, and then unload the full academic arsenal of both scientific and moral objections to the second. This approach necessarily scants a few important links in his chain of reasoning in an effort to reduce his overall point to its most objectionable elements. And for all their moralizing, the critics, almost to a one, fail to consider Wade’s expressed motivation for taking on such a fraught issue.

            Even acknowledging that Wade’s case for the role of biological evolution in historical developments like the Industrial Revolution is weak, we may still examine his reasoning up to that point in the book, which may strike many as more firmly grounded. You can also start to get a sense of what was motivating Wade when you realize that the first half of A Troublesome Inheritance recapitulates his two previous books on human evolution. The first, Before the Dawn, chronicled the evolution and history of our ancestors from a species that resembled a chimpanzee through millennia as tribal hunter-gatherers to the first permanent settlements and the emergence of agriculture. Thus, we see that all along his scholarly interest has been focused on major transitions in human prehistory.

While critics of Wade’s latest book focus almost exclusively on his attempts at connecting genomics to geopolitical history, he begins his exploration of differences between human populations by emphasizing the critical differences between humans and chimpanzees, which we can all agree came about through biological evolution. Citing a number of studies comparing human infants to chimps, Wade writes in A Troublesome Inheritance,

Besides shared intentions, another striking social behavior is that of following norms, or rules generally agreed on within the “we” group. Allied with the rule following are two other basic principles of human social behavior. One is a tendency to criticize, and if necessary punish, those who do not follow the agreed-upon norms. Another is to bolster one’s own reputation, presenting oneself as an unselfish and valuable follower of the group’s norms, an exercise that may involve finding fault with others. (49)

What separates us from chimpanzees and other apes—including our ancestors—is our much greater sociality and our much greater capacity for cooperation. (Though primatologist Frans de Waal would object to leaving the much more altruistic bonobos out of the story.) The basis for these changes was the evolution of a suite of social emotions—emotions that predispose us toward certain types of social behaviors, like punishing those who fail to adhere to group norms (keeping mum about genes and race for instance). If there’s any doubt that the human readiness to punish wrongdoers and rule violators is instinctual, ongoing studies demonstrating this trait in children too young to speak make the claim that the behavior must be taught ever more untenable. The conclusion most psychologists derive from such studies is that, for all their myriad manifestations in various contexts and diverse cultures, the social emotions of humans emerge from a biological substrate common to us all.  

            After Before the Dawn, Wade came out with The Faith Instinct, which explores theories developed by biologist David Sloan Wilson and evolutionary psychologist Jesse Bering about the adaptive role of religion in human societies. In light of cooperation’s status as one of the most essential behavioral differences between humans and chimps, other behaviors that facilitate or regulate coordinated activity suggest themselves as candidates for having pushed our ancestors along the path toward several key transitions. Language, for instance, must have been an important development. Religion may have been another. As Wade argues in A Troublesome Inheritance,

The fact that every known society has a religion suggests that each inherited a propensity for religion from the ancestral human population. The alternative explanation, that each society independently invented and maintained this distinctive human behavior, seems less likely. The propensity for religion seems instinctual, rather than purely cultural, because it is so deeply set in the human mind, touching emotional centers and appearing with such spontaneity. There is a strong evolutionary reason, moreover, that explains why religion may have become wired in the neural circuitry. A major function of religion is to provide social cohesion, a matter of particular importance among early societies. If the more cohesive societies regularly prevailed over the less cohesive, as would be likely in any military dispute, an instinct for religious behavior would have been strongly favored by natural selection. This would explain why the religious instinct is universal. But the particular form that religion takes in each society depends on culture, just as with language. (125-6)

As is evident in this passage, Wade never suggests any one-to-one correspondence between genes and behaviors. Genes function in the context of other genes in the context of individual bodies in the context of several other individual bodies. But natural selection is only about outcomes with regard to survival and reproduction. The evolution of social behavior must thus be understood as taking place through the competition, not just of individuals, but also of institutions we normally think of as purely cultural.

            The evolutionary sequence Wade envisions begins with increasing sociability enforced by a tendency to punish individuals who fail to cooperate, and moves on to tribal religions which involve synchronized behaviors, unifying beliefs, and omnipresent but invisible witnesses who discourage would-be rule violators. Once humans began living in more cohesive groups, behaviors that influenced the overall functioning of those groups became the targets of selection. Religion may have been among the first institutions that emerged to foster cohesion, but others relying on the same substrate of instincts and emotions would follow. Tracing the trajectory of our prehistory from the origin of our species in Africa, to the peopling of the world’s continents, to the first permanent settlements and the adoption of agriculture, Wade writes,

The common theme of all these developments is that when circumstances change, when a new resource can be exploited or a new enemy appears on the border, a society will change its institutions in response. Thus it’s easy to see the dynamics of how human social change takes place and why such a variety of human social structures exists. As soon as the mode of subsistence changes, a society will develop new institutions to exploit its environment more effectively. The individuals whose social behavior is better attuned to such institutions will prosper and leave more children, and the genetic variations that underlie such a behavior will become more common. (63-4)

First a society responds to shifting pressures culturally, but a new culture amounts to a new environment for individuals to adapt to. Wade understands that much of this adaptation occurs through learning. Some of the challenges posed by an evolving culture will, however, be easier for some individuals to address than others. Evolutionary anthropologists tend to think of culture as a buffer between environments and genes. Many consider it more of a wall. To Wade, though, culture is merely another aspect of the environment individuals and their genes compete to thrive in.

If you’re a cultural anthropologist and you want to study how cultures change over time, the most convenient assumption you can make is that any behavioral differences you observe between societies or over periods of time are due solely to the forces you’re hoping to isolate. Biological changes would complicate your analysis. If, on the other hand, you’re interested in studying the biological evolution of social behaviors, you will likely be inclined to assume that differences between cultures, if not based completely on genetic variance, at least rest on a substrate of inherited traits. Wade has quite obviously been interested in social evolution since his first book on anthropology, so it’s understandable that he would be excited about genome studies suggesting that human evolution has been operating recently enough to affect humans in distantly separated regions of the globe. And it’s understandable that he’d be frustrated by sanctions against investigating possible behavioral differences tied to these regional genetic differences. But this doesn’t stop his critics from insinuating that his true agenda is something other than solely scientific.

            On the technology and pop culture website io9, blogger and former policy analyst Annalee Newitz calls Wade’s book an “argument for white supremacy,” which goes a half-step farther than the critical review by Eric Johnson the post links to, titled "On the Origin of White Power." Johnson sarcastically states that Wade isn’t a racist and acknowledges that the author is correct in pointing out that considering race as a possible explanatory factor isn’t necessarily racist. But, according to Johnson’s characterization,

He then explains why white people are better because of their genes. In fairness, Wade does not say Caucasians are better per se, merely better adapted (because of their genes) to the modern economic institutions that Western society has created, and which now dominate the world’s economy and culture.

The clear implication here is that Wade’s mission is to prove that the white race is superior but that he also wanted to cloak this agenda in the garb of honest scientific inquiry. Why else would Wade publish his problematic musings? Johnson believes that scientists and journalists should self-censor speculations or as-yet unproven theories that could exacerbate societal injustices. He writes, “False scientific conclusions, often those that justify certain well-entrenched beliefs, can impact peoples’ lives for decades to come, especially when policy decisions are based on their findings.” The question this position raises is how certain we can be that any scientific “conclusion”—Wade would likely characterize it as an exploration—is indeed false before it has been made public and become the topic of further discussion and research.

Johnson’s is the leading contender for the title of most devastating critique of A Troublesome Inheritance, and he makes several excellent points that severely undermine parts of Wade’s case for natural selection playing a critical role in recent historical developments. But, like H. Allen Orr’s critique in The New York Review, the first runner-up in the contest, Johnson’s essay is oozing with condescension and startlingly unselfconscious sanctimony. These reviewers profess to be standing up for science even as they ply their readers with egregious ad hominem rhetoric (Wade is just a science writer, not a scientist) and arguments from adverse consequences (racist groups are citing Wade’s book in support of their agendas), thereby underscoring another of Wade’s arguments—that the case against racial differences in social behavior is at least as ideological as it is scientific. Might the principle that researchers should go public with politically sensitive ideas or findings only after they’ve reached some threshold of wider acceptance end up stifling free inquiry? And, if Wade’s theories really are as unlikely to bear empirical or conceptual fruit as his critics insist, shouldn’t the scientific case against them be enough? Isn’t all the innuendo and moral condemnation superfluous—maybe even a little suspicious?

            White supremacists may get some comfort from parts of Wade’s book, but if they read from cover to cover they’ll come across plenty of passages to get upset about. In addition to the suggestion that Asians are more intelligent than Caucasians, there’s the matter of the entire eighth chapter, which describes a scenario for how Ashkenazi Jews became even more intelligent than Asians and even more creative and better suited to urban institutions than Caucasians of Northern European ancestry. Wade also points out more than once that the genetic differences between the races are based, not on the presence or absence of single genes, but on clusters of alleles occurring with varying frequencies. He insists that

the significant differences are those between societies, not their individual members. But minor variations in social behavior, though barely perceptible, if at all, in an individual, combine to create societies of very different character. (244)

In other words, none of Wade’s speculations, nor any of the findings he reports, justifies discriminating against any individual because of his or her race. At best, there would only ever be a slightly larger probability that an individual will manifest any trait associated with people of the same ancestry. You’re still much better off reading the details of the résumé. Critics may dismiss as mere lip service Wade’s disclaimers about how “Racism and discrimination are wrong as a matter of principle, not of science” (7), and how the possibility of genetic advantages in certain traits “does not of course mean that Europeans are superior to others—a meaningless term in any case from an evolutionary perspective” (238).  But if Wade is secretly taking delight in the success of one race over another, it’s odd how casually he observes that “the forces of differentiation seem now to have reversed course due to increased migration, travel and intermarriage” (71).

            Wade does of course have to cite some evidence, indirect though it may be, in support of his speculations. First, he covers several genomic studies showing that, contrary to much earlier scholarship, populations of various regions of the globe are genetically distinguishable. Race, in other words, is not merely a social construct, as many have insisted. He then moves on to research suggesting that a significant portion of the human genome reveals evidence of positive selection recent enough to have affected regional populations differently. Joshua Akey’s 2009 review of multiple studies on markers of recent evolution is central to his argument. Wade interprets Akey’s report as suggesting that as much as 14 percent of the human genome shows signs of recent selection. In his review, Orr insists this is a mistake and puts the number at 8 percent.

Steven Pinker, who discusses Akey’s paper in his 2011 book The Better Angels of Our Nature, likewise takes the number to be 8 percent, not 14. But even that lower proportion is significant. Pinker, an evolutionary psychologist, stresses just how revolutionary this finding might be.

Some journalists have uncomprehendingly lauded these results as a refutation of evolutionary psychology and what they see as its politically dangerous implication of a human nature shaped by adaptation to a hunter-gatherer lifestyle. In fact the evidence for recent selection, if it applies to genes with effects on cognition and emotion, would license a far more radical form of evolutionary psychology, one in which minds have been biologically shaped by recent environments in addition to ancient ones. And it could have the incendiary implication that aboriginal and immigrant populations are less biologically adapted to the demands of modern life than populations that have lived in literate societies for millennia. (614)

Contra critics who paint him as a crypto-supremacist, it’s quite clearly that “far more radical form of evolutionary psychology” that excites Wade. That’s why he’s exasperated by what he sees as Pinker’s refusal, out of fear of its political ramifications, to admit that the case for that form is strong enough to warrant pursuing it further. Pinker does consider much of the same evidence as Wade, but where Wade sees only clear support Pinker sees several intractable complications. Indeed, the section of Better Angels where Pinker discusses recent evolution is an important addendum to Wade’s book, and it must be noted that Pinker doesn’t rule out the possibility of regional selection for social behaviors. He simply says that “for the time being, we have no need for that hypothesis” (622).

            Wade is also able to point to one gene that has already been identified whose alleles correspond to varying frequencies of violent behavior. The MAO-A gene comes in high- and low-activity varieties, and the low-activity version is more common among certain ethnic groups, like sub-Saharan Africans and Maoris. But, as Pinker points out, a majority of Chinese men also have the low-activity version of the gene, and they aren’t known for being particularly prone to violence. So the picture isn’t straightforward. Aside from the Ashkenazim, Wade cites another well-documented case in which selection for behavioral traits could have played an important role. In his book A Farewell to Alms, Gregory Clark presents an impressive collection of historical data suggesting that in the lead-up to the Industrial Revolution in England, people with personality traits that would likely have contributed to the rapid change were rewarded with more money, and people with more money had more children. The children of the wealthy would quickly overpopulate the ranks of the upper classes and thus large numbers of them inevitably descended into lower ranks. The effect of this “ratchet of wealth” (180), as Wade calls it, after multiple generations would be genes for behaviors like impulse control, patience, and thrift cascading throughout the population, priming it for the emergence of historically unprecedented institutions.

            Wade acknowledges that Clark’s theory awaits direct confirmation through the discovery of actual alleles associated with the behavioral traits he describes. But he points to experiments with artificial selection that suggest the time-scale Clark considers, about 24 generations, would have been sufficient to effect measurable changes. In his critical review, though, Johnson counters that natural selection is much slower than artificial selection, and he shows that Clark’s own numbers demonstrate a rapid attenuation of the effects of selection. Pinker points to other shortcomings in the argument, like the number of cases in which institutions changed and populations exploded in periods too short to have seen any significant change in allele frequencies. Wade isn’t swayed by any of these objections, which he takes on one-by-one, contrary to Orr’s characterization of the disagreement. As of now, the debate is ongoing. It may not be settled conclusively until scientists have a much better understanding of how genes work to influence behavior, which Wade estimates could take decades.
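
To put the quantitative question in concrete terms—whether roughly 24 generations of selection could measurably shift a heritable behavioral trait—here is a minimal sketch in Python built on the textbook breeder’s equation (per-generation response equals heritability times the selection differential). The heritability and selection-differential values below are invented purely for illustration; none of them comes from Clark’s data or Wade’s book, and the toy model ignores the complications Johnson and Pinker press, like the attenuation of selection’s effects and the shortness of the relevant historical periods.

# Toy calculation: how far could a trait's population mean shift under
# sustained selection for about 24 generations? Uses the standard breeder's
# equation, R = h2 * S. All parameter values are invented assumptions,
# not figures from A Farewell to Alms or A Troublesome Inheritance.

def cumulative_shift(h2: float, s: float, generations: int) -> float:
    """Total change in the population mean, in standard deviations, given
    narrow-sense heritability h2 and a constant per-generation selection
    differential s."""
    return h2 * s * generations

if __name__ == "__main__":
    # Modest heritability and weak selection still add up over 24 generations.
    print(cumulative_shift(h2=0.3, s=0.1, generations=24))    # 0.72 SD
    # Halve either parameter and the cumulative shift shrinks in proportion;
    # the whole dispute turns on which values, if any, are realistic.
    print(cumulative_shift(h2=0.15, s=0.05, generations=24))  # 0.18 SD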

            Pinker is not known for being politically correct, but Wade may have a point when he accuses him of not following the evidence to the most likely conclusions. “The fact that a hypothesis is politically uncomfortable,” Pinker writes, “does not mean that it is false, but it does mean that we should consider the evidence very carefully before concluding that it is true” (614). This sentiment echoes the position taken by Johnson: Hold off going public with sensitive ideas until you’re sure they’re right. But how can we ever be sure whether an idea has any validity if we’re not willing to investigate it? Wade’s case for natural selection operating through changing institutions during recorded history isn’t entirely convincing, but neither is it completely implausible. The evidence that would settle the issue simply hasn’t been discovered yet. But neither is there any evidence in Wade’s book to support the conclusion that his interest in the topic is political as opposed to purely scientific. “Each gene under selection,” he writes, “will eventually tell a fascinating story about some historical stress to which the population was exposed and then adapted” (105). Fascinating indeed, however troubling they may be.

            Is the best way to handle troublesome issues like the possible role of genes in behavioral variations between races to declare them off-limits to scientists until the evidence is incontrovertible? Might this policy come with the risk that avoiding the topic now will make it all too easy to deny any evidence that does emerge later? If genes really do play a role in violence and impulse-control, then we may need to take that into account when we’re devising solutions to societal inequities.

Genes are not gods whose desires must be bowed to. But neither are they imaginary forces that will go away if we just ignore them. The challenge of dealing with possible biological differences also arises in the context of gender. Because women continue to earn smaller incomes on average than men and are underrepresented in science and technology fields, and because the discrepancy is thought to be the product of discrimination and sexism, many scholars argue that any research into biological factors that may explain these outcomes is merely an effort at rationalizing injustice. The problem is that the evidence for biological differences in behavior between the genders is much stronger than it is for differences between populations from various regions. We can ignore these findings—and perhaps even condemn the scientists who conduct the studies—because they don’t jibe with our preferred explanations. But solutions based on willful ignorance have little chance of being effective.

            The sad fact is that scientists and academics have nothing even resembling a viable moral framework for discussing biological behavioral differences. Their only recourse is to deny and inveigh. The quite reasonable fear is that warnings like Wade’s about how the variations are subtle and may not exist at all in any given individual will go unheeded as the news of the findings is disseminated, and dumbed-down versions of the theories will be co-opted in the service of reactionary agendas. A study reveals that women respond more readily to a baby’s vocalizations and the headlines read “Genes Make Women Better Parents.” An allele associated with violent behavior is found to be more common in African Americans and some politician cites it as evidence that the astronomical incarceration rate for black males is justifiable. But is censorship the answer? Average differences between genders in career preferences are directly relevant to any discussion of uneven representation in various fields. And it’s possible that people with a certain allele will respond differently to different types of behavioral intervention. As Carl Sagan explained, in a much different context, in his book The Demon-Haunted World, “we cannot have science in bits and pieces, applying it where we feel safe and ignoring it where we feel threatened—again, because we are not wise enough to do so” (297).

            Part of the reason the public has trouble understanding what differences between varying types of people may mean is that scientists are at odds with each other about how to talk about them. And with all the righteous declamations they can start to sound a lot like the talking heads on cable news shows. Conscientious and well-intentioned scholars have so thoroughly poisoned the well when it comes to biological behavioral differences that their possible existence is treated as a moral catastrophe. How should we discuss the topic? Working to convey the importance of the distinction between average and absolute differences may be a good start. Efforts to encourage people to celebrate diversity and to challenge the equating of genes with destiny are already popularly embraced. In the realm of policy, we might shift our focus from equality of outcome to equality of opportunity. It’s all too easy to find clear examples of racial disadvantages—in housing, in schooling, in the job market—that go well beyond simple head counting at top schools and in executive boardrooms. Slight differences in behavioral propensities can’t justify such blatant instances of unfairness. Granted, that type of unfairness is much more difficult to find when it comes to gender disparities, but the lesson there is that policies and agendas based on old assumptions might need to give way to a new understanding, not that we should pretend the evidence doesn’t exist or has no meaning.

            Wade believes it was safe for him to write about race because “opposition to racism is now well entrenched” in the Western world (7). In one sense, he’s right about that. Very few people openly profess a belief in racial hierarchies. In another sense, though, it’s just as accurate to say that racism is itself well entrenched in our society. Will A Troublesome Inheritance put the brakes on efforts to bring about greater social justice? This seems unlikely if only because the publication of every Bell Curve occasions the writing of another Mismeasure of Man.

  The unfortunate result is that where you stand on the issue will become yet another badge of political identity as we form ranks on either side. Most academics will continue to consider speculation irresponsible, apply a far higher degree of scrutiny to the research, and direct the purest moral outrage they can muster, while still appearing rational and sane, at anyone who dares violate the taboo. This represents the triumph of politics over science. And it ensures the further entrenchment of views on either side of the divide.

Despite the few superficial similarities between Wade’s arguments and those of racists and eugenicists of centuries past, we have to realize that our moral condemnation of what we suppose are his invidious extra-scientific intentions is itself born of extra-scientific ideology. Whether race plays a role in behavior is a scientific question. Our attitude toward that question, and toward the parts of the answer that trickle in despite our best efforts at maintaining its status as taboo, just may emerge out of assumptions that no longer apply. So we must recognize that succumbing to the temptation to moralize when faced with scientific disagreement automatically makes hypocrites of us all. And we should bear in mind as well that insofar as racial and gender differences really do exist it will only be through coming to a better understanding of them that we can hope to usher in a more just society for children of any and all genders and races.

Also read: 

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

Dennis Junk

Lab Flies: Joshua Greene’s Moral Tribes and the Contamination of Walter White

Joshua Greene’s book “Moral Tribes” posits a dual-system theory of morality, in which a quick, intuitive system 1 makes judgments based on deontological considerations—“it’s just wrong”—whereas the slower, more deliberative system 2 takes time to calculate the consequences of any given choice. Viewers can see these two systems on display in the series “Breaking Bad,” as well as in critics’ and audiences’ responses.

Walter White’s Moral Math

In an episode near the end of Breaking Bad’s fourth season, the drug kingpin Gus Fring gives his meth cook Walter White an ultimatum. Walt’s brother-in-law Hank is a DEA agent who has been getting close to discovering the high-tech lab Gus has created for Walt and his partner Jesse, and Walt, despite his best efforts, hasn’t managed to put him off the trail. Gus decides that Walt himself has likewise become too big a liability, and he has found that Jesse can cook almost as well as his mentor. The only problem for Gus is that Jesse, even though he too is fed up with Walt, will refuse to cook if anything happens to his old partner. So Gus has Walt taken at gunpoint to the desert where he tells him to stay away from both the lab and Jesse. Walt, infuriated, goads Gus with the fact that he’s failed to turn Jesse against him completely, to which Gus responds, “For now,” before going on to say,

In the meantime, there’s the matter of your brother-in-law. He is a problem you promised to resolve. You have failed. Now it’s left to me to deal with him. If you try to interfere, this becomes a much simpler matter. I will kill your wife. I will kill your son. I will kill your infant daughter.

In other words, Gus tells Walt to stand by and let Hank be killed or else he will kill his wife and kids. Once he’s released, Walt immediately has his lawyer Saul Goodman place an anonymous call to the DEA to warn them that Hank is in danger. Afterward, Walt plans to pay a man to help his family escape to a new location with new, untraceable identities—but he soon discovers the money he was going to use to pay the man has already been spent (by his wife Skyler). Now it seems all five of them are doomed. This is when things get really interesting.

      Walt devises an elaborate plan to convince Jesse to help him kill Gus. Jesse knows that Gus would prefer Walt dead, and both Walt and Gus know that Jesse would go berserk if anyone ever tried to hurt his girlfriend’s nine-year-old son Brock. Walt’s plan is to make it look like Gus is trying to frame him for poisoning Brock with ricin. The idea is that Jesse will at first suspect Walt of poisoning Brock as punishment for betraying him and going to work with Gus, but Walt will convince him that this is really just Gus’s ploy to trick Jesse into doing what Jesse has so far forbidden—killing Walt himself. Once Jesse concludes that it was Gus who poisoned Brock, he will understand that his new boss has to go, and he will accept Walt’s offer to help him perform the deed. Walt will then be able to get Jesse to give him the crucial information he needs to figure out a way to kill Gus.

It’s a brilliant plan. The one problem is that it involves poisoning a nine-year-old child. Walt comes up with an ingenious trick which allows him to use a less deadly poison while still making it look like Brock has ingested the ricin, but for the plan to work the boy has to be made deathly ill. So Walt is faced with a dilemma: if he goes through with his plan, he can save Hank, his wife, and his two kids, but to do so he has to deceive his old partner Jesse in just about the most underhanded way imaginable—and he has to make a young boy very sick by poisoning him, with the attendant risk that something will go wrong and the boy, or someone else, or everyone else, will die anyway. The math seems easy: either four people die, or one person gets sick. The option recommended by the math is greatly complicated, however, by the fact that it involves an act of violence against an innocent child.

In the end, Walt chooses to go through with his plan, and it works perfectly. In another ingenious move, though, this time on the part of the show’s writers, Walt’s deception isn’t revealed until after his plan has been successfully implemented, which makes for an unforgettable shock at the end of the season. Unfortunately, this revelation after the fact, at a time when Walt and his family are finally safe, makes it all too easy to forget what made the dilemma so difficult in the first place—and thus makes it all too easy to condemn Walt for resorting to such monstrous means to see his way through.

            Fans of Breaking Bad who read about the famous thought-experiment called the footbridge dilemma in Harvard psychologist Joshua Greene’s multidisciplinary and momentously important book Moral Tribes: Emotion, Reason, and the Gap between Us and Them will immediately recognize the conflicting feelings underlying our responses to questions about serving some greater good by committing an act of violence. Here is how Greene describes the dilemma:

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can’t jump yourself because you, without a backpack, are not big enough to stop the trolley, and there’s no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death? (113-4)

As was the case for Walter White when he faced his child-poisoning dilemma, the math is easy: you can save five people—strangers in this case—through a single act of violence. One of the fascinating things about common responses to the footbridge dilemma, though, is that the math is all but irrelevant to most of us; no matter how many people we might save, it’s hard for us to see past the murderous deed of pushing the man off the bridge. The answer for a large majority of people faced with this dilemma, even in the case of variations which put the number of people who would be saved much higher than five, is no, pushing the stranger to his death is not morally acceptable.

Another fascinating aspect of our responses is that they change drastically with the modification of a single detail in the hypothetical scenario. In the switch dilemma, a trolley is heading for five people again, but this time you can hit a switch to shift it onto another track where there happens to be a single person who would be killed. Though the math and the underlying logic are the same—you save five people by killing one—something about pushing a person off a bridge strikes us as far worse than pulling a switch. A large majority of people say killing the one person in the switch dilemma is acceptable. To figure out which specific factors account for the different responses, Greene and his colleagues tweak various minor details of the trolley scenario before posing the dilemma to test participants. By now, so many experiments have relied on these scenarios that Greene calls trolley dilemmas the fruit flies of the emerging field known as moral psychology.

The Automatic and Manual Modes of Moral Thinking

One hypothesis for why the footbridge case strikes us as unacceptable is that it involves using a human being as an instrument, a means to an end. So Greene and his fellow trolleyologists devised a variation called the loop dilemma, which still has participants pulling a hypothetical switch, but this time the lone victim on the alternate track must stop the trolley from looping back around onto the original track. In other words, you’re still hitting the switch to save the five people, but you’re also using a human being as a trolley stop. People nonetheless tend to respond to the loop dilemma in much the same way they do the switch dilemma. So there must be some factor other than the prospect of using a person as an instrument that makes the footbridge version so objectionable to us.

Greene’s own theory for why our intuitive responses to these dilemmas are so different begins with what Daniel Kahneman, one of the founders of behavioral economics, labeled the two-system model of the mind. The first system, a sort of autopilot, is the one we operate in most of the time. We only use the second system when doing things that require conscious effort, like multiplying 26 by 47. While system one is quick and intuitive, system two is slow and demanding. Greene proposes as an analogy the automatic and manual settings on a camera. System one is point-and-click; system two, though more flexible, requires several calibrations and adjustments. We usually only engage our manual systems when faced with circumstances that are either completely new or particularly challenging.

According to Greene’s model, our automatic settings have functions that go beyond the rapid processing of information to encompass our basic set of moral emotions, from indignation to gratitude, from guilt to outrage, which motivates us to behave in ways that over evolutionary history have helped our ancestors transcend their selfish impulses to live in cooperative societies. Greene writes,

According to the dual-process theory, there is an automatic setting that sounds the emotional alarm in response to certain kinds of harmful actions, such as the action in the footbridge case. Then there’s manual mode, which by its nature tends to think in terms of costs and benefits. Manual mode looks at all of these cases—switch, footbridge, and loop—and says “Five for one? Sounds like a good deal.” Because manual mode always reaches the same conclusion in these five-for-one cases (“Good deal!”), the trend in judgment for each of these cases is ultimately determined by the automatic setting—that is, by whether the myopic module sounds the alarm. (233)

What makes the dilemmas difficult then is that we experience them in two conflicting ways. Most of us, most of the time, follow the dictates of the automatic setting, which Greene describes as myopic because its speed and efficiency come at the cost of inflexibility and limited scope for extenuating considerations.

The reason our intuitive settings sound an alarm at the thought of pushing a man off a bridge but remain silent about hitting a switch, Greene suggests, is that our ancestors evolved to live in cooperative groups where some means of preventing violence between members had to be in place to avoid dissolution—or outright implosion. One of the dangers of living with a bunch of upright-walking apes who possess the gift of foresight is that any one of them could at any time be plotting revenge for some seemingly minor slight, or conspiring to get you killed so he can move in on your spouse or inherit your belongings. For a group composed of individuals with the capacity to hold grudges and calculate future gains to function cohesively, the members must have in place some mechanism that affords a modicum of assurance that no one will murder them in their sleep. Greene writes,

To keep one’s violent behavior in check, it would help to have some kind of internal monitor, an alarm system that says “Don’t do that!” when one is contemplating an act of violence. Such an action-plan inspector would not necessarily object to all forms of violence. It might shut down, for example, when it’s time to defend oneself or attack one’s enemies. But it would, in general, make individuals very reluctant to physically harm one another, thus protecting individuals from retaliation and, perhaps, supporting cooperation at the group level. My hypothesis is that the myopic module is precisely this action-plan inspecting system, a device for keeping us from being casually violent. (226)

Hitting a switch to transfer a trolley from one track to another seems acceptable, even though a person ends up being killed, because nothing our ancestors would have recognized as violence is involved.

Many philosophers cite our different responses to the various trolley dilemmas as support for deontological systems of morality—those based on the inherent rightness or wrongness of certain actions—since we intuitively know the choices suggested by a consequentialist approach are immoral. But Greene points out that this argument begs the question of how reliable our intuitions really are. He writes,

I’ve called the footbridge dilemma a moral fruit fly, and that analogy is doubly appropriate because, if I’m right, this dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good. The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good. However, our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible. As a result, it’s all but guaranteed that we can dream up examples that exploit the cognitive inflexibility of our moral intuitions. It’s all but guaranteed that we can dream up a hypothetical action that is actually good but that seems terribly, horribly wrong because it pushes our moral buttons. I know of no better candidate for this honor than the footbridge dilemma. (251)

The obverse is that many of the things that seem morally acceptable to us actually do cause harm to people. Greene cites the example of a man who lets a child drown because he doesn’t want to ruin his expensive shoes, which most people agree is monstrous, even though we think nothing of spending money on things we don’t really need when we could be sending that money to save sick or starving children in some distant country. Then there are crimes against the environment, which always seem to rank low on our list of priorities even though their future impact on real human lives could be devastating. We have our justifications for such actions or omissions, to be sure, but how valid are they really? Is distance really a morally relevant factor when we let children die? Does the diffusion of responsibility among so many millions change the fact that we personally could have a measurable impact?

These black marks notwithstanding, cooperation, and even a certain degree of altruism, come naturally to us. To demonstrate this, Greene and his colleagues have devised some clever methods for separating test subjects’ intuitive responses from their more deliberate and effortful decisions. The experiments begin with a multi-participant exchange scenario developed by economic game theorists called the Public Goods Game, which has a number of anonymous players contribute to a common bank whose sum is then doubled and distributed evenly among them. Like the more famous game theory exchange known as the Prisoner’s Dilemma, the outcomes of the Public Goods Game reward cooperation, but only when a threshold number of fellow cooperators is reached. The flip side, however, is that any individual who decides to be stingy can get a free ride from everyone else’s contributions and make an even greater profit. What tends to happen is, over multiple rounds, the number of players opting for stinginess increases until the game is ruined for everyone, a process analogous to a phenomenon in economics known as the Tragedy of the Commons. Everyone wants to graze a few more sheep on the commons than can be sustained fairly, so eventually the grounds are left barren.
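To make the incentive structure concrete, here is a minimal sketch of a single round of the game as described above; the four-player setup, the ten-point endowment, and the doubling multiplier are illustrative assumptions, not parameters from any particular study.

```python
# A minimal sketch of one round of the Public Goods Game as described above.
# The endowment and the doubling multiplier are illustrative assumptions.

def public_goods_round(contributions, endowment=10, multiplier=2):
    """Return each player's payoff: whatever they kept plus an equal
    share of the doubled common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Four full cooperators double their money...
print(public_goods_round([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# ...but a lone free rider does better still (25) while dragging the
# cooperators down to 15 -- the incentive that unravels later rounds.
print(public_goods_round([10, 10, 10, 0]))   # [15.0, 15.0, 15.0, 25.0]
```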

The Biological and Cultural Evolution of Morality

            Greene believes that humans evolved emotional mechanisms to prevent the various analogs of the Tragedy of the Commons from occurring so that we can live together harmoniously in tight-knit groups. The outcomes of multiple rounds of the Public Goods Game, for instance, tend to be far less dismal when players are given the opportunity to devote a portion of their own profits to punishing free riders. Most humans, it turns out, will be motivated by the emotion of anger to expend their own resources for the sake of enforcing fairness. Over several rounds, cooperation becomes the norm. Such an outcome has been replicated again and again, but researchers are always interested in factors that influence players’ strategies in the early rounds. Greene describes a series of experiments he conducted with David Rand and Martin Nowak, which were reported in an article in Nature in 2012. He writes,

…we conducted our own Public Goods Games, in which we forced some people to decide quickly (less than ten seconds) and forced others to decide slowly (more than ten seconds). As predicted, forcing people to decide faster made them more cooperative and forcing people to slow down made them less cooperative (more likely to free ride). In other experiments, we asked people, before playing the Public Goods Game, to write about a time in which their intuitions served them well, or about a time in which careful reasoning led them astray. Reflecting on the advantages of intuitive thinking (or the disadvantages of careful reflection) made people more cooperative. Likewise, reflecting on the advantages of careful reasoning (or the disadvantages of intuitive thinking) made people less cooperative. (62)

These results offer strong support for Greene’s dual-process theory of morality, and they even hint at the possibility that the manual mode is fundamentally selfish or amoral—in other words, that the philosophers have been right all along in deferring to human intuitions about right and wrong.
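The punishment condition Greene describes can be grafted onto the same illustrative round. The one-to-three cost-to-fine ratio below is an assumption borrowed from common versions of the experiment, not a figure reported in Greene's account.

```python
def punish(payoffs, punishers, free_riders, cost=1, fine=3):
    """Each punisher pays `cost` per free rider targeted; each free rider
    loses `fine` per punisher. Returns the adjusted payoffs."""
    adjusted = list(payoffs)
    for p in punishers:
        adjusted[p] -= cost * len(free_riders)
    for f in free_riders:
        adjusted[f] -= fine * len(punishers)
    return adjusted

# Starting from the round sketched earlier, three cooperators each paying 1
# to fine the lone free rider cut its ten-point edge down to two (16 vs. 14);
# the prospect of repeated fines is what pushes later rounds toward cooperation.
print(punish([15.0, 15.0, 15.0, 25.0], punishers=[0, 1, 2], free_riders=[3]))
# [14.0, 14.0, 14.0, 16.0]
```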

            As good as our intuitive moral sense is for preventing the Tragedy of the Commons, however, when given free rein in a society composed of large groups of people who are strangers to one another, each group with its own culture and priorities, our natural moral settings bring about an altogether different tragedy. Greene labels it the Tragedy of Commonsense Morality. He explains,

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). (23)

Expanding on the story behind the Tragedy of the Commons, Greene describes what would happen if several groups, each having developed its own unique solution for making sure the commons were protected from overgrazing, were suddenly to come into contact with one another on a transformed landscape called the New Pastures. Each group would likely harbor suspicions against the others, and when it came time to negotiate a new set of rules to govern everybody the groups would all show a significant, though largely unconscious, bias in favor of their own members and their own ways.

The origins of moral psychology as a field can be traced to both developmental and evolutionary psychology. Seminal research conducted at Yale’s Infant Cognition Center, led by Karen Wynn, Kiley Hamlin, and Paul Bloom (and which Bloom describes in a charming and highly accessible book called Just Babies), has demonstrated that children as young as six months possess what we can easily recognize as a rudimentary moral sense. These findings suggest that much of the behavior we might have previously ascribed to lessons learned from adults is actually innate. Experiments based on game theory scenarios and thought-experiments like the trolley dilemmas are likewise thought to tap into evolved patterns of human behavior. Yet when University of British Columbia psychologist Joseph Henrich teamed up with several anthropologists to see how people living in various small-scale societies responded to game theory scenarios like the Prisoner’s Dilemma and the Public Goods Game they discovered a great deal of variation. On the one hand, then, human moral intuitions seem to be rooted in emotional responses present at, or at least close to, birth, but on the other hand cultures vary widely in their conventional responses to classic dilemmas. These differences between cultural conceptions of right and wrong are in large part responsible for the conflict Greene envisions in his Parable of the New Pastures.

But how can a moral sense be both innate and culturally variable?  “As you might expect,” Greene explains, “the way people play these games reflects the way they live.” People in some cultures rely much more heavily on cooperation to procure their sustenance, as is the case with the Lamelara of Indonesia, who live off the meat of whales they hunt in groups. Cultures also vary in how much they rely on market economies as opposed to less abstract and less formal modes of exchange. Just as people adjust the way they play economic games in response to other players’ moves, people acquire habits of cooperation based on the circumstances they face in their particular societies. Regarding the differences between small-scale societies in common game theory strategies, Greene writes,

Henrich and colleagues found that payoffs to cooperation and market integration explain more than two thirds of the variation across these cultures. A more recent study shows that, across societies, market integration is an excellent predictor of altruism in the Dictator Game. At the same time, many factors that you might expect to be important predictors of cooperative behavior—things like an individual’s sex, age, and relative wealth, or the amount of money at stake—have little predictive power. (72)

In much the same way humans are programmed to learn a language and acquire a set of religious beliefs, they also come into the world with a suite of emotional mechanisms that make up the raw material for what will become a culturally calibrated set of moral intuitions. The specific language and religion we end up with is of course dependent on the social context of our upbringing, just as our specific moral settings will reflect those of other people in the societies we grow up in.

Jonathan Haidt and Tribal Righteousness

In our modern industrial society, we actually have some degree of choice when it comes to our cultural affiliations, and this freedom opens the way for heritable differences between individuals to play a larger role in our moral development. Such differences are nowhere more apparent than in the realm of politics, where nearly all citizens occupy some point on a continuum between conservative and liberal. According to Greene’s fellow moral psychologist Jonathan Haidt, we have precious little control over our moral responses because reason only comes into play to justify actions and judgments we’ve already made. In his fascinating 2012 book The Righteous Mind, Haidt insists,

Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth. (50)

To explain the moral divide between right and left, Haidt points to the findings of his own research on what he calls Moral Foundations, six dimensions underlying our intuitions about moral and immoral actions. Conservatives tend to endorse judgments based on all six of the foundations, valuing loyalty, authority, and sanctity much more than liberals, who focus more exclusively on care for the disadvantaged, fairness, and freedom from oppression. Since our politics emerge from our moral intuitions and reason merely serves as a sort of PR agent to rationalize judgments after the fact, Haidt enjoins us to be more accepting of rival political groups— after all, you can’t reason with them. 

            Greene objects both to Haidt’s Moral Foundations theory and to his prescription for a politics of complementarity. The responses to questions representing all the moral dimensions in Haidt’s studies form two clusters on a graph, Greene points out, not six, suggesting that the differences between conservatives and liberals are attributable to some single overarching variable as opposed to several individual tendencies. Furthermore, the specific content of the questions Haidt uses to flesh out the values of his respondents has a critical limitation. Greene writes,

According to Haidt, American social conservatives place greater value on respect for authority, and that’s true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. (339)

The same limitation applies to the loyalty and sanctity foundations. Conservatives feel little loyalty toward the atheists and Muslims among their fellow Americans. Nor do they recognize the sanctity of mosques or Hindu holy texts. Greene goes on,

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists—loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal. In this they’re akin to the world’s other socially conservative tribes, from the Taliban in Afghanistan to European nationalists. According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise may be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic. (340)

Greene believes such persuasion is possible, even with regard to emotionally and morally charged controversies, because he sees our manual-mode thinking as playing a potentially much greater role than Haidt allows.

Metamorality on the New Pastures

            Throughout The Righteous Mind, Haidt argues that the moral philosophers who laid the foundations of modern liberal and academic conceptions of right and wrong gave short shrift to emotions and intuitions—that they gave far too much credit to our capacity for reason. To be fair, Haidt does honor the distinction between descriptive and prescriptive theories of morality, but he nonetheless gives the impression that he considers liberal morality to be somewhat impoverished. Greene sees this attitude as thoroughly wrongheaded. Responding to Haidt’s metaphor comparing his Moral Foundations to taste buds—with the implication that the liberal palate is more limited in the range of flavors it can appreciate—Greene writes,

The great philosophers of the Enlightenment wrote at a time when the world was rapidly shrinking, forcing them to wonder whether their own laws, their own traditions, and their own God(s) were any better than anyone else’s. They wrote at a time when technology (e.g., ships) and consequent economic productivity (e.g., global trade) put wealth and power into the hands of a rising educated class, with incentives to question the traditional authorities of king and church. Finally, at this time, natural science was making the world comprehensible in secular terms, revealing universal natural laws and overturning ancient religious doctrines. Philosophers wondered whether there might also be universal moral laws, ones that, like Newton’s law of gravitation, applied to members of all tribes, whether or not they knew it. Thus, the Enlightenment philosophers were not arbitrarily shedding moral taste buds. They were looking for deeper, universal moral truths, and for good reason. They were looking for moral truths beyond the teachings of any particular religion and beyond the will of any earthly king. They were looking for what I’ve called a metamorality: a pan-tribal, or post-tribal, philosophy to govern life on the new pastures. (338-9)

While Haidt insists we must recognize the centrality of intuitions even in this civilization nominally ruled by reason, Greene points out that it was skepticism of old, seemingly unassailable and intuitive truths that opened up the world and made modern industrial civilization possible in the first place.  

            As Haidt explains, though, conservative morality serves people well in certain regards. Christian churches, for instance, encourage charity and foster a sense of community few secular institutions can match. But these advantages at the level of parochial groups have to be weighed against the problems tribalism inevitably leads to at higher levels. This is, in fact, precisely the point Greene created his Parable of the New Pastures to make. He writes,

The Tragedy of the Commons is averted by a suite of automatic settings—moral emotions that motivate and stabilize cooperation within limited groups. But the Tragedy of Commonsense Morality arises because of automatic settings, because different tribes have different automatic settings, causing them to see the world through different moral lenses. The Tragedy of the Commons is a tragedy of selfishness, but the Tragedy of Commonsense Morality is a tragedy of moral inflexibility. There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives. How should they think? The answer is now obvious: They should shift into manual mode. (172)

Greene argues that whenever we, as a society, are faced with a moral controversy—as with issues like abortion, capital punishment, and tax policy—our intuitions will not suffice because our intuitions are the very basis of our disagreement.

            Watching the conclusion to season four of Breaking Bad, most viewers probably responded to finding out that Walt had poisoned Brock by thinking that he’d become a monster—at least at first. Indeed, the currently dominant academic approach to art criticism involves taking a stance, both moral and political, with regard to a work’s models and messages. Emily Nussbaum, for instance, writing in The New Yorker, disparages viewers of Breaking Bad for failing to condemn Walt. She writes,

When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)

To arrive at such an assessment, Nussbaum must reduce the show to the impact she assumes it will have on less sophisticated fans’ attitudes and judgments. But the really troubling aspect of this type of criticism is that it encourages scholars and critics to indulge their impulse toward self-righteousness when faced with challenging moral dilemmas; in other words, it encourages them to give voice to their automatic modes precisely when they should be shifting to manual mode. Thus, Nussbaum neglects outright the very details that make Walt’s scenario compelling, completely forgetting that by making Brock sick—and, yes, risking his life—he was able to save Hank, Skyler, and his own two children.

But how should we go about arriving at a resolution to moral dilemmas and political controversies if we agree we can’t always trust our intuitions? Greene believes that, while our automatic modes recognize certain acts as wrong and certain others as a matter of duty to perform, in keeping with deontological ethics, whenever we switch to manual mode, the focus shifts to weighing the relative desirability of each option's outcomes. In other words, manual mode thinking is consequentialist. And, since we tend to assess outcomes according to their impact on other people, favoring those that improve the quality of their experiences the most, or detract from it the least, Greene argues that whenever we slow down and think through moral dilemmas deliberately we become utilitarians. He writes,

If I’m right, this convergence between what seems like the right moral philosophy (from a certain perspective) and what seems like the right moral psychology (from a certain perspective) is no accident. If I’m right, Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode. They put aside their inflexible automatic settings and instead asked two very abstract questions. First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality. Combining these two ideas, we get utilitarianism: We should maximize the quality of our experience, giving equal weight to the experience of each person. (173)

If you cite an authority recognized only by your own tribe—say, the Bible—in support of a moral argument, then members of other tribes will either simply discount your position or counter it with pronouncements by their own authorities. If, on the other hand, you argue for a law or a policy by citing evidence that implementing it would mean longer, healthier, happier lives for the citizens it affects, then only those seeking to establish the dominion of their own tribe can discount your position (which of course isn’t to say they can’t offer rival interpretations of your evidence).

            If we turn commonsense morality on its head and evaluate the consequences of giving our intuitions priority over utilitarian accounting, we can find countless circumstances in which being overly moral is to everyone’s detriment. Ideas of justice and fairness allow far too much space for selfish and tribal biases, whereas the math behind mutually optimal outcomes based on compromise tends to be harder to fudge. Greene reports, for instance, the findings of a series of experiments conducted by Fieke Harinck and colleagues at the University of Amsterdam in 2000. Lawyers negotiating on behalf of either the prosecution or the defense were told either to focus on serving justice or to get the best outcomes for their clients. The negotiations in the first condition almost always ended in an impasse. Greene explains,

Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. (88)

Framing an argument as an effort to establish who was right and who was wrong is like drawing a line in the sand—it activates tribal attitudes pitting us against them, while treating negotiations more like an economic exchange circumvents these tribal biases.

Challenges to Utilitarianism

But do we really want to suppress our automatic moral reactions in favor of deliberative accountings of the greatest good for the greatest number? Deontologists have posed some effective challenges to utilitarianism in the form of thought-experiments that seem to show efforts to improve the quality of experiences would lead to atrocities. For instance, Greene recounts how, in a high school debate, he was confronted with the hypothetical of a surgeon who could save five sick people by killing one healthy one. Then there’s the so-called Utility Monster, who experiences such happiness when eating humans that it quantitatively outweighs the suffering of those being eaten. More down-to-earth examples feature a scapegoat convicted of a crime to prevent rioting by people who are angry about police ineptitude, and the use of torture to extract information from a prisoner that could prevent a terrorist attack. The most influential challenge to utilitarianism, however, was leveled by the political philosopher John Rawls when he pointed out that it could be used to justify the enslavement of a minority by a majority.

Greene’s responses to these criticisms make up one of the most surprising, important, and fascinating parts of Moral Tribes. First, highly contrived thought-experiments about Utility Monsters and circumstances in which pushing a guy off a bridge is guaranteed to stop a trolley may indeed prove that utilitarianism is not true in any absolute sense. But whether or not such moral absolutes even exist is a contentious issue in its own right. Greene explains,

I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough. (275-6)

One source of confusion regarding the slavery issue is the equation of happiness with money; slave owners probably could make profits in excess of the losses sustained by the slaves. But money is often a poor index of happiness. Greene underscores this point by asking us to consider how much someone would have to pay us to sell ourselves into slavery. “In the real world,” he writes, “oppression offers only modest gains in happiness to the oppressors while heaping misery upon the oppressed” (284-5).

            Another failing of the thought-experiments meant to undermine utilitarianism is the shortsightedness of the supposedly obvious responses. The crimes of the murderous doctor and the scapegoating law officers may indeed produce short-term increases in happiness, but if the secret gets out, healthy and innocent people will live in fear, knowing they can’t trust doctors and law officers. The same logic applies to the objection that utilitarianism would force us to become infinitely charitable, since we can almost always afford to be more generous than we currently are. But how long could we serve as so-called happiness pumps before burning out, becoming miserable, and losing the capacity to make anyone else happier? Greene writes,

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits. (258)

Greene seems to be endorsing what philosophers call "rule utilitarianism." We can approach every choice by calculating the likely outcomes, but as a society we would be better served deciding on some rules for everyone to adhere to. It just may be possible for a doctor to increase happiness through murder in a particular set of circumstances—but most people would vociferously object to a rule legitimizing the practice.

The concept of human rights may present another challenge to Greene in his championing of consequentialism over deontology. It is our duty, after all, to recognize the rights of every human, and we ourselves have no right to disregard someone else’s rights no matter what benefit we believe might result from doing so. In his book The Better Angels of our Nature, Steven Pinker, Greene’s colleague at Harvard, attributes much of the steep decline in rates of violent death over the past three centuries to a series of what he calls Rights Revolutions, the first of which began during the Enlightenment. But the problem with arguments that refer to rights, Greene explains, is that controversies arise for the very reason that people don’t agree which rights we should recognize. He writes,

Thus, appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a “right to life.” If you feel that outlawing abortion is wrong, you can talk about a “right to choose.” If you’re Iran, you can talk about your “nuclear rights,” and if you’re Israel you can talk about your “right to self-defense.” “Rights” are nothing short of brilliant. They allow us to rationalize our gut feelings without doing any additional work. (302)

The only way to resolve controversies over which rights we should actually recognize and which rights we should prioritize over others, Greene argues, is to apply utilitarian reasoning.

Ideology and Epistemology

            In his discussion of the proper use of the language of rights, Greene comes closer than in any other section of Moral Tribes to explicitly articulating what strikes me as the most revolutionary idea that he and his fellow moral psychologists are suggesting—albeit as of yet only implicitly. In his advocacy for what he calls “deep pragmatism,” Greene isn’t merely applying evolutionary theories to an old philosophical debate; he’s actually posing a subtly different question. The numerous thought-experiments philosophers use to poke holes in utilitarianism may not have much relevance in the real world—but they do undermine any claim utilitarianism may have on absolute moral truth. Greene’s approach is therefore to eschew any effort to arrive at absolute truths, including truths pertaining to rights. Instead, in much the same way scientists accept that our knowledge of the natural world is mediated by theories, which only approach the truth asymptotically, never capturing it with any finality, Greene intimates that the important task in the realm of moral philosophy isn’t to arrive at a full accounting of moral truths but rather to establish a process for resolving moral and political dilemmas.

What’s needed, in other words, isn’t a rock solid moral ideology but a workable moral epistemology. And, just as empiricism serves as the foundation of the epistemology of science, Greene makes a convincing case that we could use utilitarianism as the basis of an epistemology of morality. Pursuing the analogy between scientific and moral epistemologies even farther, we can compare theories, which stand or fall according to their empirical support, to individual human rights, which we afford and affirm according to their impact on the collective happiness of every individual in the society. Greene writes,

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is, once again, because we have no non-question-begging (and non-utilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it’s not worth arguing—either because the question has been settled or because our opponents can’t be reasoned with—then it’s time to start rallying the troops. It’s time to affirm our moral commitments, not with wonky estimates of probabilities but with words that stir our souls. (308-9)

Rights may be the closest thing we have to moral truths, just as theories serve as our stand-ins for truths about the natural world, but even more important than rights or theories are the processes we rely on to establish and revise them.

A New Theory of Narrative

            As if a philosophical revolution weren’t enough, moral psychology is also putting in place what could be the foundation of a new understanding of the role of narratives in human lives. At the heart of every story is a conflict between competing moral ideals. In commercial fiction, there tends to be a character representing each side of the conflict, and audiences can be counted on to favor one side over the other—the good, altruistic guys over the bad, selfish guys. In more literary fiction, on the other hand, individual characters are faced with dilemmas pitting various modes of moral thinking against each other. In season one of Breaking Bad, for instance, Walter White famously writes a list on a notepad of the pros and cons of murdering the drug dealer restrained in Jesse’s basement. Everyone, including Walt, feels that killing the man is wrong, but if they let him go Walt and his family will be at risk of retaliation. This dilemma is in fact quite similar to ones he faces in each of the following seasons, right up until he has to decide whether or not to poison Brock. Trying to work out what Walt should do, and anxiously anticipating what he will do, are mental exercises few can help engaging in as they watch the show.

            The current reigning conception of narrative in academia explains the appeal of stories by suggesting it derives from their tendency to fulfill conscious and unconscious desires, most troublesomely our desires to have our prejudices reinforced. We like to see men behaving in ways stereotypically male, women stereotypically female, minorities stereotypically black or Hispanic, and so on. Cultural products like works of art, and even scientific findings, are embraced, the reasoning goes, because they cement the hegemony of various dominant categories of people within the society. This tradition in arts scholarship and criticism can in large part be traced back to psychoanalysis, but it has developed over the last century to incorporate both the predominant view of language in the humanities and the cultural determinism espoused by many scholars in the service of various forms of identity politics. 

            The postmodern ideology that emerged from the convergence of these schools is marked by a suspicion that science is often little more than a veiled effort at buttressing the political status quo, and its preeminent thinkers deliberately set themselves apart from the humanist and Enlightenment traditions that held sway in academia until the middle of the last century by writing in byzantine, incoherent prose. Even though there could be no rational way either to support or to challenge postmodern ideas, scholars still take them as cause for accusing both scientists and storytellers of using their work to further reactionary agendas.

For anyone who recognizes the unparalleled power of science both to advance our understanding of the natural world and to improve the conditions of human lives, postmodernism stands out as a catastrophic wrong turn, not just in academic history but in the moral evolution of our civilization. The equation of narrative with fantasy is a bizarre fantasy in its own right. Attempting to explain the appeal of a show like Breaking Bad by suggesting that viewers have an unconscious urge to be diagnosed with cancer and to subsequently become drug manufacturers is symptomatic of intractable institutional delusion. And, as Pinker recounts in Better Angels, literature, and novels in particular, were likely instrumental in bringing about the shift in consciousness toward greater compassion for greater numbers of people that resulted in the unprecedented decline in violence beginning in the second half of the eighteenth century.

Yet, when it comes to arts scholarship, postmodernism is just about the only game in town. Granted, the writing in this tradition has progressed a great deal toward greater clarity, but the focus on identity politics has intensified to the point of hysteria: you’d be hard-pressed to find a major literary figure who hasn’t been accused of misogyny at one point or another, and any scientist who dares study something like gender differences can count on having her motives questioned and her work lampooned by well-intentioned, well-indoctrinated squads of well-poisoning liberal wags.

            When Emily Nussbaum complains about viewers of Breaking Bad being lulled by the “masculine adventure” and the digs against “nagging wives” into becoming enablers of Walt’s bad behavior, she’s doing exactly what so many of us were taught to do in academic courses on literary and film criticism, applying a postmodern feminist ideology to the show—and completely missing the point. As the series opens, Walt is deliberately portrayed as downtrodden and frustrated, and Skyler’s bullying is an important part of that dynamic. But the pleasure of watching the show doesn’t come so much from seeing Walt get out from under Skyler’s thumb—he never really does—as it does from anticipating and fretting over how far Walt will go in his criminality, goaded on by all that pent-up frustration. Walt shows a similar concern for himself, worrying over what effect his exploits will have on who he is and how he will be remembered. We see this in season three when he becomes obsessed by the “contamination” of his lab—which turns out to be no more than a house fly—and at several other points as well. Viewers are not concerned with Walt because he serves as an avatar acting out their fantasies (or else the show would have a lot more nude scenes with the principal of the school he teaches in). They’re concerned because, at least at the beginning of the show, he seems to be a good person and they can sympathize with his tribulations.

The much more solidly grounded view of narrative inspired by moral psychology suggests that common themes in fiction are not reflections or reinforcements of some dominant culture, but rather emerge from aspects of our universal human psychology. Our feelings about characters, according to this view, aren’t determined by how well they coincide with our abstract prejudices; on the contrary, we favor the types of people in fiction we would favor in real life. Indeed, if the story is any good, we will have to remind ourselves that the people whose lives we’re tracking really are fictional. Greene doesn’t explore the intersection between moral psychology and narrative in Moral Tribes, but he does give a nod to what we can only hope will be a burgeoning field when he writes, 

Nowhere is our concern for how others treat others more apparent than in our intense engagement with fiction. Were we purely selfish, we wouldn’t pay good money to hear a made-up story about a ragtag group of orphans who use their street smarts and quirky talents to outfox a criminal gang. We find stories about imaginary heroes and villains engrossing because they engage our social emotions, the ones that guide our reactions to real-life cooperators and rogues. We are not disinterested parties. (59)

Many people ask why we care so much about people who aren’t even real, but we only ever reflect on the fact that what we’re reading or viewing is a simulation when we’re not sufficiently engrossed by it.

            Was Nussbaum completely wrong to insist that Walt went beyond the pale when he poisoned Brock? She certainly showed the type of myopia Greene attributes to the automatic mode by forgetting Walt saved at least four lives by endangering the one. But most viewers probably had a similar reaction. The trouble wasn’t that she was appalled; it was that her postmodernism encouraged her to unquestioningly embrace and give voice to her initial feelings. Greene writes, 

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise. Our moral alarm systems think that the difference between pushing and hitting a switch is of great moral significance. More important, our moral alarm systems see no difference between self-serving murder and saving a million lives at the cost of one. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. (253)

Greene doesn’t reveal whether he’s a Breaking Bad fan or not, but his discussion of the footbridge dilemma gives readers a good indication of how he’d respond to Walt’s actions.

If you don’t feel that it’s wrong to push the man off the footbridge, there’s something wrong with you. I, too, feel that it’s wrong, and I doubt that I could actually bring myself to push, and I’m glad that I’m like this. What’s more, in the real world, not pushing would almost certainly be the right decision. But if someone with the best of intentions were to muster the will to push the man off the footbridge, knowing for sure that it would save five lives, and knowing for sure that there was no better alternative, I would approve of this action, although I might be suspicious of the person who chose to perform it. (251)

The overarching theme of Breaking Bad, which Nussbaum fails utterly to comprehend, is the transformation of a good man into a bad one. When he poisons Brock, we’re glad he succeeded in saving his family, and some of us are even okay with his methods, but we’re worried—suspicious even—about what his ability to go through with it says about him. Over the course of the series, we’ve found ourselves rooting for Walt, and we’ve come to really like him. We don’t want to see him break too bad. And however bad he does break, we can’t help hoping for his redemption. Since he’s a valuable member of our tribe, we’re loath even to consider that it might be time for him to go.

Also read:

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk Dennis Junk

Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans De Waal

Frans de Waal’s work is always a joy to read: insightful, surprising, and superbly humane. Unfortunately, in his mostly wonderful book, “The Bonobo and the Atheist,” he trots out a familiar series of straw men to level an attack on modern critics of religion—with whom, had he read their work more diligently, he’d find much common ground.

            Whenever literary folk talk about voice, that supposedly ineffable but transcendently important quality of narration, they display an exasperating penchant for vagueness, as if so lofty a dimension to so lofty an endeavor couldn’t withstand being spoken of directly—or as if they took delight in instilling panic and self-doubt into the quivering hearts of aspiring authors. What the folk who actually know what they mean by voice actually mean by it is all the idiosyncratic elements of prose that give readers a stark and persuasive impression of the narrator as a character. Discussions of what makes for stark and persuasive characters, on the other hand, are vague by necessity. It must be noted that many characters even outside of fiction are neither. As a first step toward developing a feel for how character can be conveyed through writing, we may consider the nonfiction work of real people with real character, ones who also happen to be practiced authors.

The Dutch-American primatologist Frans de Waal is one such real-life character, and his prose stands as testament to the power of written language, lonely ink on colorless pages, not only to impart information, but to communicate personality and to make a contagion of states and traits like enthusiasm, vanity, fellow-feeling, bluster, big-heartedness, impatience, and an abiding wonder. De Waal is a writer with voice. Many other scientists and science writers explore this dimension to prose in their attempts to engage readers, but few avoid the traps of being goofy or obnoxious instead of funny—a trap David Pogue, for instance, falls into routinely as he hosts NOVA on PBS—and of expending far too much effort in their attempts at being distinctive, thus failing to achieve anything resembling grace. 

The most striking quality of de Waal’s writing, however, isn’t that its good-humored quirkiness never seems strained or contrived, but that it never strays far from the man’s own obsession with getting at the stories behind the behaviors he so minutely observes—whether the characters are his fellow humans or his fellow primates, or even such seemingly unstoried creatures as rats or turtles. But to say that de Waal is an animal lover doesn’t quite capture the essence of what can only be described as a compulsive fascination marked by conviction—the conviction that when he peers into the eyes of a creature others might dismiss as an automaton, a bundle of twitching flesh powered by preprogrammed instinct, he sees something quite different, something much closer to the workings of his own mind and those of his fellow humans.

De Waal’s latest book, The Bonobo and the Atheist: In Search of Humanism among the Primates, reprises the main themes of his previous books, most centrally the continuity between humans and other primates, with an eye toward answering the questions of where morality does, and where it should, come from. Whereas in his books from the years leading up to the turn of the century he again and again had to challenge what he calls “veneer theory,” the notion that without a process of socialization that imposes rules on individuals from some outside source they’d all be greedy and selfish monsters, de Waal has noticed over the past six or so years a marked shift in the zeitgeist toward an awareness of our more cooperative and even altruistic animal urgings. Noting a sharp difference over the decades in how audiences at his lectures respond to recitations of the infamous quote by biologist Michael Ghiselin, “Scratch an altruist and watch a hypocrite bleed,” de Waal writes,

Although I have featured this cynical line for decades in my lectures, it is only since about 2005 that audiences greet it with audible gasps and guffaws as something so outrageous, so out of touch with how they see themselves, that they can’t believe it was ever taken seriously. Had the author never had a friend? A loving wife? Or a dog, for that matter? (43)

The assumption underlying veneer theory was that without civilizing influences humans’ deeper animal impulses would express themselves unchecked. The further assumption was that animals, the end products of the ruthless, eons-long battle for survival and reproduction, would reflect the ruthlessness of that battle in their behavior. De Waal’s first book, Chimpanzee Politics, which told the story of a period of intensified competition among the captive male chimps at the Arnhem Zoo for alpha status, with all the associated perks like first dibs on choice cuisine and sexually receptive females, was actually seen by many as lending credence to these assumptions. But de Waal himself was far from convinced that the primates he studied were invariably, or even predominantly, violent and selfish.

            What he observed at the zoo in Arnhem was far from the chaotic and bloody free-for-all it would have been if the chimps took the kind of delight in violence for its own sake that many people imagine them being disposed to. As he pointed out in his second book, Peacemaking among Primates, the violence is almost invariably attended by obvious signs of anxiety on the part of those participating in it, and the tension surrounding any major conflict quickly spreads throughout the entire community. The hierarchy itself is in fact an adaptation that serves as a check on the incessant conflict that would ensue if the relative status of each individual had to be worked out anew every time one chimp encountered another. “Tightly embedded in society,” he writes in The Bonobo and the Atheist, “they respect the limits it puts on their behavior and are ready to rock the boat only if they can get away with it or if so much is at stake that it’s worth the risk” (154). But the most remarkable thing de Waal observed came in the wake of the fights that couldn’t successfully be avoided. Chimps, along with primates of several other species, reliably make reconciliatory overtures toward one another after they’ve come to blows—and bites and scratches. In light of such reconciliations, primate violence begins to look like a momentary, albeit potentially dangerous, readjustment to a regularly peaceful social order rather than any ongoing melee, as individuals with increasing or waning strength negotiate a stable new arrangement.

            Part of the enchantment of de Waal’s writing is his judicious and deft balancing of anecdotes about the primates he works with on the one hand and descriptions of controlled studies he and his fellow researchers conduct on the other. In The Bonobo and the Atheist, he strikes a more personal note than he has in any of his previous books, at points stretching the bounds of the popular science genre and crossing into the realm of memoir. This attempt at peeling back the surface of that other veneer, the white-coated scientist’s posture of mechanistic objectivity and impassive empiricism, works best when de Waal is merging tales of his animal experiences with reports on the research that ultimately provides evidence for what was originally no more than an intuition. Discussing a recent, and to most people somewhat startling, experiment pitting the social against the alimentary preferences of a distant mammalian cousin, he recounts,

Despite the bad reputation of these animals, I have no trouble relating to its findings, having kept rats as pets during my college years. Not that they helped me become popular with the girls, but they taught me that rats are clean, smart, and affectionate. In an experiment at the University of Chicago, a rat was placed in an enclosure where it encountered a transparent container with another rat. This rat was locked up, wriggling in distress. Not only did the first rat learn how to open a little door to liberate the second, but its motivation to do so was astonishing. Faced with a choice between two containers, one with chocolate chips and another with a trapped companion, it often rescued its companion first. (142-3)

This experiment, conducted by Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, actually got a lot of media coverage; Mason was even interviewed for an episode of NOVA Science NOW where you can watch a video of the rats performing the jailbreak and sharing the chocolate (and you can also see David Pogue being obnoxious). This type of coverage has probably played a role in the shift in public opinion regarding the altruistic propensities of humans and animals. But if there’s one species whose behavior can be said to have undermined the cynicism underlying veneer theory—aside from our best friend the dog of course—it would have to be de Waal’s leading character, the bonobo.

            De Waal’s 1997 book Bonobo: The Forgotten Ape, on which he collaborated with photographer Frans Lanting, introduced this charismatic, peace-loving, sex-loving primate to the masses, and in the process provided behavioral scientists with a new model for what our own ancestors’ social lives might have looked like. Bonobo females dominate the males to the point where zoos have learned never to import a strange male into a new community without the protection of his mother. But for the most part any tensions, even those over food, even those between members of neighboring groups, are resolved through genito-genital rubbing—a behavior that looks an awful lot like sex and often culminates in vocalizations and facial expressions that resemble, to a remarkable degree, those of humans experiencing orgasm. The implications of bonobos’ hippy-like habits have even reached into politics. After an uncharacteristically ill-researched and ill-reasoned article in the New Yorker by Ian Parker suggested that the apes weren’t as peaceful and erotic as we’d been led to believe, conservatives couldn’t help celebrating. De Waal writes in The Bonobo and the Atheist,

Given that this ape’s reputation has been a thorn in the side of homophobes as well as Hobbesians, the right-wing media jumped with delight. The bonobo “myth” could finally be put to rest, and nature remain red in tooth and claw. The conservative commentator Dinesh D’Souza accused “liberals” of having fashioned the bonobo into their mascot, and he urged them to stick with the donkey. (63)

But most primate researchers think the behavioral differences between chimps and bonobos are pretty obvious. De Waal points out that while violence does occur among the apes on rare occasions “there are no confirmed reports of lethal aggression among bonobos” (63). Chimps, on the other hand, have been observed doing all kinds of killing. Bonobos also outperform chimps in experiments designed to test their capacity for cooperation, as in the setup that requires two individuals to pull on a rope at the same time in order for either of them to get ahold of food placed atop a plank of wood. (Incidentally, the New Yorker’s track record when it comes to anthropology is suspiciously checkered—disgraced author Patrick Tierney’s discredited book on Napoleon Chagnon, for instance, was originally excerpted in the magazine.)

            Bonobos came late to the scientific discussion of what ape behavior can tell us about our evolutionary history. The famous chimp researcher Robert Yerkes, whose name graces the facility de Waal currently directs at Emory University in Atlanta, actually wrote an entire book called Almost Human about what he believed was a rather remarkable chimp. A photograph from that period reveals that it wasn’t a chimp at all. It was a bonobo. Now, as this species is becoming better researched, and with the discovery of fossils like the 4.4 million-year-old Ardipithecus ramidus known as Ardi, a bipedal ape with fangs that are quite small when compared to the lethal daggers sported by chimps, the role of violence in our ancestry is ever more uncertain. De Waal writes,

What if we descend not from a blustering chimp-like ancestor but from a gentle, empathic bonobo-like ape? The bonobo’s body proportions—its long legs and narrow shoulders—seem to perfectly fit the descriptions of Ardi, as do its relatively small canines. Why was the bonobo overlooked? What if the chimpanzee, instead of being an ancestral prototype, is in fact a violent outlier in an otherwise relatively peaceful lineage? Ardi is telling us something, and there may exist little agreement about what she is saying, but I hear a refreshing halt to the drums of war that have accompanied all previous scenarios. (61)

De Waal is well aware of all the behaviors humans engage in that are more emblematic of chimps than of bonobos—in his 2005 book Our Inner Ape, he refers to humans as “the bipolar ape”—but the fact that our genetic relatedness to both species is exactly the same, along with the fact that chimps also have a surprising capacity for peacemaking and empathy, suggests to him that evolution has had plenty of time and plenty of raw material to instill in us the emotional underpinnings of a morality that emerges naturally—without having to be imposed by religion or philosophy. “Rather than having developed morality from scratch through rational reflection,” he writes in The Bonobo and the Atheist, “we received a huge push in the rear from our background as social animals” (17).

            In the eighth and final chapter of The Bonobo and the Atheist, titled “Bottom-Up Morality,” de Waal describes what he believes is an alternative to top-down theories that attempt to derive morals from religion on the one hand and from reason on the other. Invisible beings threatening eternal punishment can frighten us into doing the right thing, and principles of fairness might offer slight nudges in the direction of proper comportment, but we must already have some intuitive sense of right and wrong for either of these belief systems to operate on if they’re to be at all compelling. Many people assume moral intuitions are inculcated in childhood, but experiments like the one that showed rats will come to the aid of distressed companions suggest something deeper, something more ingrained, is involved. De Waal has found that a video of capuchin monkeys demonstrating "inequity aversion"—a natural, intuitive sense of fairness—does a much better job than any charts or graphs at getting past the prejudices of philosophers and economists who want to insist that fairness is too complex a principle for mere monkeys to comprehend. He writes,

This became an immensely popular experiment in which one monkey received cucumber slices while another received grapes for the same task. The monkeys had no trouble performing if both received identical rewards of whatever quality, but rejected unequal outcomes with such vehemence that there could be little doubt about their feelings. I often show their reactions to audiences, who almost fall out of their chairs laughing—which I interpret as a sign of surprised recognition. (232)

What the capuchins do when they see someone else getting a better reward is throw the measly cucumber back at the experimenter and proceed to rattle the cage in agitation. De Waal compares it to the Occupy Wall Street protests. The poor monkeys clearly recognize the insanity of the human they’re working for.

            There’s still a long way to travel, however, from helpful rats and protesting capuchins to human morality. But that gap continues to shrink as researchers find new ways to explore the social behaviors of the primates even more closely related to us. Chimps, for instance, have been seen taking inequity aversion a step beyond what monkeys display. Not only will certain individuals refuse to work for lesser rewards; they’ll refuse to work even for the superior rewards if they see their companions aren’t being paid equally. De Waal acknowledges, though, that an important gap still separates these behaviors from human morality. “I am reluctant to call a chimpanzee a ‘moral being,’” he writes.

This is because sentiments do not suffice. We strive for a logically coherent system and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be morally wrong. These debates are uniquely human. There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves. (17-8)

Moral intuitions can often inspire behaviors that seem appallingly immoral to people in modern liberal societies. De Waal quotes anthropologist Christopher Boehm on the “special, pejorative moral ‘discount’ applied to cultural strangers—who often are not even considered fully human,” and he goes on to explain that “The more we expand morality’s reach, the more we need to rely on our intellect.” But the intellectual principles must be grounded in the instincts and emotions we evolved as social primates; this is what he means by bottom-up morality or “naturalized ethics” (235).

*****

            In locating the foundations of morality in our evolved emotions—propensities we share with primates and even rats—de Waal seems to be taking a firm stand against any need for religion. But he insists throughout the book that this isn’t the case. And, while the idea that people are quite capable of playing fair and treating each other with compassion without any supernatural policing may seem to land him squarely in the same camp as prominent atheists like Richard Dawkins and Christopher Hitchens, whom he calls “neo-atheists,” he contends that they’re just as misguided as, if not more misguided than, the people of faith who believe the rules must be handed down from heaven. “Even though Dawkins cautioned against his own anthropomorphism of the gene,” de Waal wrote all the way back in his 1996 book Good Natured: The Origins of Right and Wrong in Humans and Other Animals, “with the passage of time, carriers of selfish genes became selfish by association” (14). Thus de Waal tries to find some middle ground between religious dogmatists on one side and those who are equally dogmatic in their opposition to religion and equally mistaken in their espousal of veneer theory on the other. “I consider dogmatism a far greater threat than religion per se,” he writes in The Bonobo and the Atheist.

I am particularly curious why anyone would drop religion while retaining the blinkers sometimes associated with it. Why are the “neo-atheists” of today so obsessed with God’s nonexistence that they go on media rampages, wear T-shirts proclaiming their absence of belief, or call for a militant atheism? What does atheism have to offer that’s worth fighting for? (84)

For de Waal, neo-atheism is an empty placeholder of a philosophy, defined not by any positive belief but merely by an obstinately negative attitude toward religion. It’s hard to tell early on in his book if this view is based on any actual familiarity with the books whose titles—The God Delusion, god is not Great—he takes issue with. What is obvious, though, is that he’s trying to appeal to some spirit of moderation so that he might reach an audience who may have already been turned off by the stridency of the debates over religion’s role in society. At any rate, we can be pretty sure that Hitchens, for one, would have had something to say about de Waal’s characterization.

De Waal’s expertise as a primatologist gave him what was in many ways an ideal perspective on the selfish gene debates, as well as on sociobiology more generally, much the way Sarah Blaffer Hrdy’s expertise has done for her. The monkeys and apes de Waal works with are a far cry from the ants and wasps that originally inspired the gene-centered approach to explaining behavior. “There are the bees dying for their hive,” he writes in The Bonobo and the Atheist,

and the millions of slime mold cells that build a single, sluglike organism that permits a few among them to reproduce. This kind of sacrifice was put on the same level as the man jumping into an icy river to rescue a stranger or the chimpanzee sharing food with a whining orphan. From an evolutionary perspective, both kinds of helping are comparable, but psychologically speaking they are radically different. (33)

At the same time, though, de Waal gets to see up close almost every day how similar we are to our evolutionary cousins, and the continuities leave no question as to the wrongheadedness of blank slate ideas about socialization. “The road between genes and behavior is far from straight,” he writes, sounding a note similar to that of the late Stephen Jay Gould, “and the psychology that produces altruism deserves as much attention as the genes themselves.” He goes on to explain,

Mammals have what I call an “altruistic impulse” in that they respond to signs of distress in others and feel an urge to improve their situation. To recognize the need of others, and react appropriately, is really not the same as a preprogrammed tendency to sacrifice oneself for the genetic good. (33)

We can’t discount the role of biology, in other words, but we must keep in mind that genes are at the distant end of a long chain of cause and effect that has countless other inputs before it links to emotion and behavior. De Waal angered both the social constructivists and quite a few of the gene-centered evolutionists, but by now the balanced view his work as a primatologist helped him to arrive at has, for the most part, won the day. Now, in his other role as a scientist who studies the evolution of morality, he wants to strike a similar balance between extremists on both sides of the religious divide. Unfortunately, in this new arena, his perspective isn’t anywhere near as well informed.

             The type of religion de Waal points to as evidence that the neo-atheists’ concerns are misguided and excessive is definitely moderate. It’s not even based on any actual beliefs, just some nice ideas and stories adherents enjoy hearing and thinking about in a spirit of play. We have to wonder, though, just how prevalent this New Age, Life-of-Pi type of religion really is. I suspect the passages in The Bonobo and the Atheist discussing it will be offensive to atheists and people of actual faith alike. Here’s one example of the bizarre way he writes about religion:

Neo-atheists are like people standing outside a movie theater telling us that Leonardo DiCaprio didn’t really go down with the Titanic. How shocking! Most of us are perfectly comfortable with the duality. Humor relies on it, too, lulling us into one way of looking at a situation only to hit us over the head with another. To enrich reality is one of the most delightful capacities we have, from pretend play in childhood to visions of an afterlife when we grow older. (294)

He seems to be suggesting that the religious know, on some level, their beliefs aren’t true. “Some realities exist,” he writes, “some we just like to believe in” (294). The problem is that while many readers may enjoy the innuendo about humorless and inveterately over-literal atheists, most believers aren’t joking around—even the non-extremists are more serious than de Waal seems to think.

            As someone who’s been reading de Waal’s books for the past seventeen years, someone who wanted to strangle Ian Parker after reading his cheap smear piece in The New Yorker, someone who has admired the great primatologist since my days as an undergrad anthropology student, I experienced the sections of The Bonobo and the Atheist devoted to criticisms of neo-atheism, which make up roughly a quarter of this short book, as soul-crushingly disappointing. And I’ve agonized over how to write this part of the review. The middle path de Waal carves out is between a watered-down religion believers don’t really believe on one side and an egregious postmodern caricature of Sam Harris’s and Christopher Hitchens’s positions on the other. He focuses on Harris because of his book, The Moral Landscape, which explores how we might use science to determine our morals and values instead of religion, but he gives every indication of never having actually read the book and of instead basing his criticisms solely on the book’s reputation among Harris’s most hysterical detractors. And he targets Hitchens because he thinks he holds the psychological key to understanding what he refers to as Hitchens’s “serial dogmatism.” But de Waal’s case is so flimsy a freshman journalism student could demolish it with no more than about ten minutes of internet fact-checking.

De Waal does acknowledge that we should be skeptical of “religious institutions and their ‘primates’,” but he wonders “what good could possibly come from insulting the many people who find value in religion?” (19). This is the tightrope he tries to walk throughout his book. His focus on the purely negative aspect of atheism juxtaposed with his strange conception of the role of belief seems designed to give readers the impression that if the atheists succeed society might actually suffer severe damage. He writes,

Religion is much more than belief. The question is not so much whether religion is true or false, but how it shapes our lives, and what might possibly take its place if we were to get rid of it the way an Aztec priest rips the beating heart out of a virgin. What could fill the gaping hole and take over the removed organ’s functions? (216)

The first problem is that many people who call themselves humanists, as de Waal does, might suggest that there are in fact many things that could fill the gap—science, literature, philosophy, music, cinema, human rights activism, just to name a few. But the second problem is that the militancy of the militant atheists is purely and avowedly rhetorical. In a debate with Hitchens, former British Prime Minister Tony Blair once held up the same straw man that de Waal drags through the pages of his book, the claim that neo-atheists are trying to extirpate religion from society entirely, to which Hitchens replied, “In fairness, no one was arguing that religion should or will die out of the world. All I’m arguing is that it would be better if there was a great deal more by way of an outbreak of secularism” (20:20). What Hitchens is after is an end to the deference automatically afforded religious ideas by dint of their supposed sacredness; religious ideas need to be critically weighed just like any other ideas—and when they are thus weighed they often don’t fare so well, in either logical or moral terms. It’s hard to understand why de Waal would have a problem with this view.

*****

            De Waal’s position is even more incoherent with regard to Harris’s arguments about the potential for a science of morality, since those arguments attempt to answer, at least in part, the very question he poses again and again throughout The Bonobo and the Atheist: what might take the place of religion in providing guidance in our lives? De Waal takes issue first with the book’s title, The Moral Landscape: How Science Can Determine Human Values. The notion that science might determine any aspect of morality suggests to him a top-down approach as opposed to his favored bottom-up strategy that takes “naturalized ethics” as its touchstone. This is, however, a mischaracterization of Harris’s thesis, though de Waal seems never to realize it. Rather than engage Harris’s arguments in any direct or meaningful way, de Waal contents himself with following in the footsteps of critics who apply the postmodern strategy of holding the book to account for every analogy, however tenuous or tendentious, that can be drawn between it and historical evils. De Waal writes, for instance,

While I do welcome a science of morality—my own work is part of it—I can’t fathom calls for science to determine human values (as per the subtitle of Sam Harris’s The Moral Landscape). Is pseudoscience something of the past? Are modern scientists free from moral biases? Think of the Tuskegee syphilis study just a few decades ago, or the ongoing involvement of medical doctors in prisoner torture at Guantanamo Bay. I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden. (22)

(Great phrase, that “morality’s handmaiden.”) But Harris never argues that scientists are any more morally pure than anyone else. His argument is that the “science of morality” de Waal proudly contributes to should be applied to the big moral issues our society faces.

            The guilt-by-association and guilt-by-historical-analogy tactics on display in The Bonobo and the Atheist extend all the way to that lodestar of postmodernism’s hysterical obsessions. We might hope that de Waal, after witnessing the frenzied insanity of the sociobiology controversy from the front row, would know better. But he doesn’t seem to grasp how toxic this type of rhetoric is to reasoned discourse and honest inquiry. After expressing his bafflement at how science and a naturalistic worldview could inspire good the way religion does (even though his main argument is that such external inspiration to do good is unnecessary), he writes,

It took Adolf Hitler and his henchmen to expose the moral bankruptcy of these ideas. The inevitable result was a precipitous drop of faith in science, especially biology. In the 1970s, biologists were still commonly equated with fascists, such as during the heated protest against “sociobiology.” As a biologist myself, I am glad those acrimonious days are over, but at the same time I wonder how anyone could forget this past and hail science as our moral savior. How did we move from deep distrust to naïve optimism? (22)

Was Nazism born of an attempt to apply science to moral questions? It’s true some people use science in evil ways, but not nearly as commonly as people are directly urged by religion to perpetrate evils like inquisitions or holy wars. When science has directly inspired evil, as in the case of eugenics, the lifespan of the mistake was measurable in years or decades rather than centuries or millennia. Not to minimize the real human costs, but science wins hands down by being self-correcting and, certain individual scientists notwithstanding, undogmatic.

Harris intended for his book to begin a debate he was prepared to actively participate in. But he quickly ran into the problem that postmodern criticisms can’t really be dealt with in any meaningful way. The following long quote from Harris’s response to his battier critics in the Huffington Post will show both that de Waal’s characterization of his argument is way off the mark, and that it is suspiciously unoriginal:

How, for instance, should I respond to the novelist Marilynne Robinson’s paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think—beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.

And we have to ask further: what alternative source of ethical principles do self-righteous grandstanders like Robinson and Horgan—and now de Waal—have to offer? In their eagerness to compare everyone to the Nazis, they seem to be deriving their own morality from Fox News.

De Waal makes three objections to Harris’s arguments that are of actual substance, but none of them are anywhere near as devastating to Harris’s overall case as de Waal makes out. First, Harris begins with the assumption that moral behaviors lead to “human flourishing,” but this is a presupposed value as opposed to an empirical finding of science—or so de Waal claims. But here’s de Waal himself on a level of morality sometimes seen in apes that transcends one-on-one interactions between individuals:

female chimpanzees have been seen to drag reluctant males toward each other to make up after a fight, while removing weapons from their hands. Moreover, high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and that we don’t need God to explain how we got to where we are today. (20)

The similarity between the concepts of human flourishing and community concern highlights one of the main areas of confusion de Waal could have avoided by actually reading Harris’s book. The word “determine” in the title has two possible meanings. Science can determine values in the sense that it can guide us toward behaviors that will bring about flourishing. But it can also determine our values in the sense of discovering what we already naturally value and hence what conditions need to be met for us to flourish.

De Waal performs a sleight of hand late in The Bonobo and the Atheist, substituting a generic “utilitarian” for Harris and justifying the trick by pointing out that utilitarians also seek to maximize human flourishing—though Harris never claims to be one. This leads de Waal to object that strict utilitarianism isn’t viable because he’s more likely to direct his resources to his own ailing mother than to any stranger in need, even if those resources would benefit the stranger more. Thus de Waal faults Harris’s ethics for overlooking the role of loyalty in human lives. His third criticism is similar: he worries that utilitarians might infringe on the rights of a minority to maximize flourishing for a majority. But how, given what we know about human nature, could we expect humans to flourish—to feel as though they were flourishing—in a society that didn’t properly honor friendship and the bonds of family? How could humans be happy in a society where they had to constantly fear being sacrificed to the whims of the majority? It is in precisely this effort to discover—or determine—the circumstances under which humans flourish that Harris believes science can be of the most help. And as de Waal moves up from his mammalian foundations of morality to more abstract ethical principles, the separation between his approach and Harris’s starts to look suspiciously like a distinction without a difference.

            Harris in fact points out that honoring family bonds probably leads to greater well-being on pages seventy-three and seventy-four of The Moral Landscape, and de Waal quotes from page seventy-four himself to chastise Harris for concentrating too much on "the especially low-hanging fruit of conservative Islam" (74). The incoherence of de Waal's argument (and the carelessness of his research) is on full display here as he first responds to a point about the genital mutilation of young girls by asking, "Isn't genital mutilation common in the United States, too, where newborn males are circumcised without their consent?" (90). So cutting off the foreskin of a male's penis is morally equivalent to cutting off a girl's clitoris? Supposedly, the equivalence implies that there can't be any reliable way to determine the relative moral status of religious practices. "Could it be that religion and culture interact to the point that there is no universal morality?" Perhaps, but, personally, as a circumcised male, I think this argument is a real howler.

*****

The slick scholarly laziness on display in The Bonobo and the Atheist is just as bad when it comes to the positions, and the personality, of Christopher Hitchens, whom de Waal sees fit to psychoanalyze instead of engaging his arguments in any substantive way—but whose memoir, Hitch-22, he’s clearly never bothered to read. The straw man about the neo-atheists being bent on obliterating religion entirely is, disappointingly, but not surprisingly by this point, just one of several errors and misrepresentations. De Waal’s main argument against Hitchens, that his atheism is just another dogma, just as much a religion as any other, is taken right from the list of standard talking points the most incurious of religious apologists like to recite against him. Theorizing that “activist atheism reflects trauma” (87)—by which he means that people raised under severe religions will grow up to espouse severe ideologies of one form or another—de Waal goes on to suggest that neo-atheism is an outgrowth of “serial dogmatism”:

Hitchens was outraged by the dogmatism of religion, yet he himself had moved from Marxism (he was a Trotskyist) to Greek Orthodox Christianity, then to American Neo-Conservatism, followed by an “antitheist” stance that blamed all of the world’s troubles on religion. Hitchens thus swung from the left to the right, from anti-Vietnam War to cheerleader of the Iraq War, and from pro to contra God. He ended up favoring Dick Cheney over Mother Teresa. (89)

This is truly awful rubbish, and it’s really too bad Hitchens isn’t around anymore to take de Waal to task for it himself. First, this passage allows us to catch out de Waal’s abuse of the term dogma; dogmatism is rigid adherence to beliefs that aren’t open to questioning. The test of dogmatism is whether you’re willing to adjust your views in light of new evidence or changing circumstances—it has nothing to do with how willing or eager you are to debate. What de Waal is labeling dogmatism is what we normally call outspokenness. Second, his facts are simply wrong. For one, though Hitchens was labeled a neocon by some of his fellows on the left simply because he supported the invasion of Iraq, he never considered himself one. When he was asked in an interview for the New Statesman if he was a neoconservative, he responded unequivocally, “I’m not a conservative of any kind.” Finally, can’t someone be for one war and against another, or agree with certain aspects of a religious or political leader’s policies and not others, without being shiftily dogmatic?

            De Waal never really goes into much detail about what the “naturalized ethics” he advocates might look like beyond insisting that we should take a bottom-up approach to arriving at them. This evasiveness gives him space to criticize other nonbelievers regardless of how closely their ideas might resemble his own. “Convictions never follow straight from evidence or logic,” he writes. “Convictions reach us through the prism of human interpretation” (109). He takes this somewhat banal observation (but do they really never follow straight from evidence?) as a license to dismiss the arguments of others based on silly psychologizing. “In the same way that firefighters are sometimes stealth arsonists,” he writes, “and homophobes closet homosexuals, do some atheists secretly long for the certitude of religion?” (88). We could of course just as easily turn this Freudian rhetorical trap back against de Waal and his own convictions. Is he a closet dogmatist himself? Does he secretly hold the unconscious conviction that primates are really nothing like humans and that his research is all a big sham?

            Christopher Hitchens was another real-life character whose personality shone through his writing, and like Yossarian in Joseph Heller’s Catch-22 he often found himself in a position where he knew being sane would put him at odds with the masses, thus convincing everyone of his insanity. Hitchens particularly identified with the exchange near the end of Heller’s novel in which an officer, Major Danby, says, “But, Yossarian, suppose everyone felt that way,” to which Yossarian replies, “Then I’d certainly be a damned fool to feel any other way, wouldn’t I?” (446). (The title for his memoir came from a word game he and several of his literary friends played with book titles.) It greatly saddens me to see de Waal pitting himself against such a ham-fisted caricature of a man in whom, had he taken the time to actually explore his writings, he would likely have found much to admire. Why did Hitch become such a strong advocate for atheism? He made no secret of his motivations. And de Waal, who faults Harris (wrongly) for leaving loyalty out of his moral equations, just might identify with them. It began when the theocratic dictator of Iran put a hit out on Hitchens’s friend, the author Salman Rushdie, because he deemed one of Rushdie’s novels blasphemous. Hitchens writes in Hitch-22,

When the Washington Post telephoned me at home on Valentine’s Day 1989 to ask my opinion about the Ayatollah Khomeini’s fatwah, I felt at once that here was something that completely committed me. It was, if I can phrase it like this, a matter of everything I hated versus everything I loved. In the hate column: dictatorship, religion, stupidity, demagogy, censorship, bullying, and intimidation. In the love column: literature, irony, humor, the individual, and the defense of free expression. Plus, of course, friendship—though I like to think that my reaction would have been the same if I hadn’t known Salman at all. (268)

Suddenly, neo-atheism doesn’t seem like an empty placeholder anymore. To criticize atheists so harshly for having convictions that are too strong, de Waal has to ignore all the societal and global issues religion is on the wrong side of. But when we consider the arguments on each side of the abortion or gay marriage or capital punishment or science education debates, it’s easy to see that neo-atheists are only against religion because they feel it runs counter to the positive values of skeptical inquiry, egalitarian discourse, free society, and the ascendancy of reason and evidence.

            De Waal ends The Bonobo and the Atheist with a really corny section in which he imagines how a bonobo would lecture atheists about morality and the proper stance toward religion. “Tolerance of religion,” the bonobo says, “even if religion is not always tolerant in return, allows humanism to focus on what is most important, which is to build a better society based on natural human abilities” (237). Hitchens is of course no longer around to respond to the bonobo, but many of the same issues came up in his debate with Tony Blair (I hope no one reads this as an insult to the former PM), who at one point also argued that religion might be useful in building better societies—look at all the charity work religious organizations do, for instance. Hitch, already showing signs of physical deterioration from the treatment for the esophageal cancer that would eventually kill him, responds,

The cure for poverty has a name in fact. It’s called the empowerment of women. If you give women some control over the rate at which they reproduce, if you give them some say, take them off the animal cycle of reproduction to which nature and some doctrine, religious doctrine, condemns them, and then if you’ll throw in a handful of seeds perhaps and some credit, the floor, the floor of everything in that village, not just poverty, but education, health, and optimism, will increase. It doesn’t matter—try it in Bangladesh, try it in Bolivia. It works. It works all the time. Name me one religion that stands for that—or ever has. Wherever you look in the world and you try to remove the shackles of ignorance and disease and stupidity from women, it is invariably the clerisy that stands in the way. (23:05)

            Later in the debate, Hitch goes on to argue in a way that sounds suspiciously like an echo of de Waal’s challenges to veneer theory and his advocacy for bottom-up morality. He says,

The injunction not to do unto others what would be repulsive if done to yourself is found in the Analects of Confucius if you want to date it—but actually it’s found in the heart of every person in this room. Everybody knows that much. We don’t require divine permission to know right from wrong. We don’t need tablets administered to us ten at a time in tablet form, on pain of death, to be able to have a moral argument. No, we have the reasoning and the moral suasion of Socrates and of our own abilities. We don’t need dictatorship to give us right from wrong. (25:43)

And as a last word in his case and mine I’ll quote this very de Waalian line from Hitch: “There’s actually a sense of pleasure to be had in helping your fellow creature. I think that should be enough” (35:42).

Also read:

TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND

Dennis Junk

The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy Disguised as a Review of “Mothers and Others: The Evolutionary Origins of Mutual Understanding”

Sarah Blaffer Hrdy’s book “Mother Nature” was one of the first things I ever read about evolutionary psychology. With her new book, “Mothers and Others,” Hrdy lays out a theory for why humans are so cooperative compared to their ape cousins. Once again, she’s managed to pen a work that will stand the test of time, rewarding multiple readings well into the future.

One way to think of the job of anthropologists studying human evolution is to divide it into two basic components: the first is to arrive at a comprehensive and precise catalogue of the features and behaviors that make humans different from the species most closely related to us, and the second is to arrange all these differences in order of their emergence in our ancestral line. Knowing what came first is essential—though not sufficient—to the task of distinguishing between causes and effects. For instance, humans have brains that are significantly larger than those of any other primate, and we use these brains to fashion tools that are far more elaborate than the stones, sticks, leaves, and sponges used by other apes. Humans are also the only living ape that routinely walks upright on two legs. Since most of us probably give pride of place in the hierarchy of our species’ idiosyncrasies to our intelligence, we can sympathize with early Darwinian thinkers who felt sure brain expansion must have been what started our ancestors down their unique trajectory, making possible the development of increasingly complex tools, which in turn made having our hands liberated from locomotion duty ever more advantageous.

This hypothetical sequence, however, was dashed rather dramatically with the discovery in 1974 of Lucy, the 3.2 million-year-old skeleton of an Australopithecus afarensis, in Ethiopia. Lucy resembles a chimpanzee in most respects, including cranial capacity, except that her bones have all the hallmarks of a creature with a bipedal gait. Anthropologists like to joke that Lucy proved butts were more important to our evolution than brains. But, though intelligence wasn’t the first of our distinctive traits to evolve, most scientists still believe it was the deciding factor behind our current dominance. At least for now, humans go into the jungle and build zoos and research facilities to study apes, not the other way around. Other apes certainly can’t compete with humans in terms of sheer numbers. Still, intelligence is a catch-all term. We must ask what exactly it is that our bigger brains can do better than those of our phylogenetic cousins.

A couple decades ago, that key capacity was thought to be language, which makes symbolic thought possible. Or is it symbolic thought that makes language possible? Either way, though a handful of ape prodigies have amassed some high vocabulary scores in labs where they’ve been taught to use pictographs or sign language, human three-year-olds accomplish similar feats as a routine part of their development. As primatologist and sociobiologist (one of the few who unabashedly uses that term for her field) Sarah Blaffer Hrdy explains in her 2009 book Mothers and Others: The Evolutionary Origins of Mutual Understanding, human language relies on abilities and interests that go beyond merely reporting on the state of the outside world, beyond simply matching objects or actions with symbolic labels. Honeybees signal the location of food with their dances, vervet monkeys have distinct signals for attacks by flying versus ground-approaching predators, and the list goes on. Where humans excel when it comes to language is not just in the realm of versatility, but also in our desire to bond through these communicative efforts. Hrdy writes,

The open-ended qualities of language go beyond signaling. The impetus for language has to do with wanting to “tell” someone else what is on our minds and learn what is on theirs. The desire to psychologically connect with others had to evolve before language. (38)

The question Hrdy attempts to answer in Mothers and Others—the difference between humans and other apes she wants to place within a theoretical sequence of evolutionary developments—is how we evolved to be so docile, tolerant, and nice as to be able to cram ourselves by the dozens into tight spaces like airplanes without conflict. “I cannot help wondering,” she recalls having thought in a plane preparing for flight,

what would happen if my fellow human passengers suddenly morphed into another species of ape. What if I were traveling with a planeload of chimpanzees? Any of us would be lucky to disembark with all ten fingers and toes still attached, with the baby still breathing and unmaimed. Bloody earlobes and other appendages would litter the aisles. Compressing so many highly impulsive strangers into a tight space would be a recipe for mayhem. (3)

Over the past decade, the human capacity for cooperation, and even for altruism, has been at the center of evolutionary theorizing. Some clever experiments in the field of economic game theory have revealed several scenarios in which humans can be counted on to act against their own interest. What survival and reproductive advantages could possibly accrue to creatures given to acting for the benefit of others?

When it comes to economic exchanges, of course, human thinking isn’t tied to the here-and-now the way the thinking of other animals tends to be. To explain why humans might, say, forgo a small payment in exchange for the opportunity to punish a trading partner for withholding a larger, fairer payment, many behavioral scientists point out that humans seldom think in terms of one-off deals. Any human living in a society of other humans needs to protect his or her reputation for not being someone who abides cheating. Experimental settings are well and good, but throughout human evolutionary history individuals could never have been sure they wouldn’t encounter exchange partners a second or third time in the future. It so happens that one of the dominant theories to explain ape intelligence relies on the need for individuals within somewhat stable societies to track who owes whom favors, who is subordinate to whom, and who can successfully deceive whom. This “Machiavellian intelligence” hypothesis explains the cleverness of humans and other apes as the outcome of countless generations vying for status and reproductive opportunities in intensely competitive social groups.

One of the difficulties in trying to account for the evolution of intelligence is that its advantages seem like such a no-brainer. Isn’t it always better to be smarter? But, as Hrdy points out, the Machiavellian intelligence hypothesis runs into a serious problem. Social competition may have been an important factor in making primates brainier than other mammals, but it can’t explain why humans are brainier than other apes. She writes,

We still have to explain why humans are so much better than chimpanzees at conceptualizing what others are thinking, why we are born innately eager to interpret their motives, feelings, and intentions as well as care about their affective states and moods—in short, why humans are so well equipped for mutual understanding. Chimpanzees, after all, are at least as socially competitive as humans are. (46)

To bolster this point, Hrdy cites research showing that infant chimps have some dazzling social abilities once thought to belong solely to humans. In 1977, developmental psychologist Andrew Meltzoff published his finding that newborn humans mirror the facial expressions of adults they engage with. It was thought that this tendency in humans relied on some neurological structures unique to our lineage which provided the raw material for the evolution of our incomparable social intelligence. But then in 1996 primatologist Masako Myowa replicated Meltzoff’s findings with infant chimps. This and other research suggests that other apes have probably had much the same raw material for natural selection to act on. Yet, whereas the imitative and empathic skills flourish in maturing humans, they seem to atrophy in apes. Hrdy explains,

Even though other primates are turning out to be far better at reading intentions than primatologists initially realized, early flickerings of empathic interest—what might even be termed tentative quests for intersubjective engagement—fade away instead of developing and intensifying as they do in human children. (58)

So the question of what happened in human evolution to make us so different remains.

*****

Sarah Blaffer Hrdy exemplifies a rare, possibly unique, blend of scientific rigor and humanistic sensitivity—the vision of a great scientist and the fine observation of a novelist (or the vision of a great novelist and fine observation of a scientist). Reading her 1999 book, Mother Nature: A History of Mothers, Infants, and Natural Selection, was a watershed experience for me. In going beyond the realm of the literate into that of the literary while hewing closely to strict epistemic principle, she may surpass the accomplishments of even such great figures as Richard Dawkins and Stephen Jay Gould. In fact, since Mother Nature was one of the books through which I was introduced to sociobiology—more commonly known today as evolutionary psychology—I was a bit baffled at first by much of the criticism leveled against the field by Gould and others who claimed it was founded on overly simplistic premises and often produced theories that were politically reactionary.

The theme to which Hrdy continually returns is the too-frequently overlooked role of women and their struggles in those hypothetical evolutionary sequences anthropologists string together. For inspiration in her battle against facile biological theories whose sole purpose is to provide a cheap rationale for the political status quo, she turned, not to a scientist, but to a novelist. The man most responsible for the misapplication of Darwin’s theory of natural selection to the justification of human societal hierarchies was the philosopher Herbert Spencer, in whose eyes women were no more than what Hrdy characterizes as “Breeding Machines.” Spencer and his fellow evolutionists in the Victorian age, she explains in Mother Nature,

took for granted that being female forestalled women from evolving “the power of abstract reasoning and that most abstract of emotions, the sentiment of justice.” Predestined to be mothers, women were born to be passive and noncompetitive, intuitive rather than logical. Misinterpretations of the evidence regarding women’s intelligence were cleared up early in the twentieth century. More basic difficulties having to do with this overly narrow definition of female nature were incorporated into Darwinism proper and linger to the present day. (17)

Many women over the generations have been unable to envision a remedy for this bias in biology. Hrdy describes the reaction of a literary giant whose lead many have followed.

For Virginia Woolf, the biases were unforgivable. She rejected science outright. “Science, it would seem, is not sexless; she is a man, a father, and infected too,” Woolf warned back in 1938. Her diagnosis was accepted and passed on from woman to woman. It is still taught today in university courses. Such charges reinforce the alienation many women, especially feminists, feel toward evolutionary theory and fields like sociobiology. (xvii)

But another literary luminary much closer to the advent of evolutionary thinking had a more constructive, and combative, response to short-sighted male biologists. And it is to her that Hrdy looks for inspiration. “I fall in Eliot’s camp,” she writes, “aware of the many sources of bias, but nevertheless impressed by the strength of science as a way of knowing” (xviii). She explains that George Eliot,

whose real name was Mary Ann Evans, recognized that her own experiences, frustrations, and desires did not fit within the narrow stereotypes scientists then prescribed for her sex. “I need not crush myself… within a mould of theory called Nature!” she wrote. Eliot’s primary interest was always human nature as it could be revealed through rational study. Thus she was already reading an advance copy of On the Origin of Species on November 24, 1859, the day Darwin’s book was published. For her, “Science has no sex… the mere knowing and reasoning faculties, if they act correctly, must go through the same process and arrive at the same result.” (xvii)

Eliot’s distaste for Spencer’s idea that women’s bodies were designed to divert resources away from the brain to the womb was as personal as it was intellectual. She had in fact met and quickly fallen in love with Spencer in 1851. She went on to send him a proposal which he rejected on eugenic grounds: “…as far as posterity is concerned,” Hrdy quotes, “a cultivated intelligence based upon a bad physique is of little worth, seeing that its descendants will die out in a generation or two.” Eliot’s retort came in the form of a literary caricature—though Spencer already seems a bit like his own caricature. Hrdy writes,

In her first major novel, Adam Bede (read by Darwin as he relaxed after the exertions of preparing Origin for publication), Eliot put Spencer’s views concerning the diversion of somatic energy into reproduction in the mouth of a pedantic and blatantly misogynist old schoolmaster, Mr. Bartle: “That’s the way with these women—they’ve got no head-pieces to nourish, and so their food all runs either to fat or brats.” (17)

A mother of three and a Professor Emerita of Anthropology at the University of California, Davis, Hrdy is eloquent on the need for intelligence—and lots of familial and societal support—if one is to balance duties and ambitions like her own. Her first contribution to ethology came when she realized that the infanticide among Hanuman langurs, which she’d gone to Mount Abu in Rajasthan, India, to study at age 26 for her doctoral thesis, had nothing to do with overpopulation, as many suspected. Instead, the pattern she observed was that whenever an outside male deposed a group’s main breeder he immediately began exterminating all of the prior male’s offspring to induce the females to ovulate and give birth again—this time to the new male’s offspring. This was the selfish gene theory in action. But the females Hrdy was studying had an interesting response to this strategy.

In the early 1970s, it was still widely assumed by Darwinians that females were sexually passive and “coy.” Female langurs were anything but. When bands of roving males approached the troop, females would solicit them or actually leave their troop to go in search of them. On occasion, a female mated with invaders even though she was already pregnant and not ovulating (something else nonhuman primates were not supposed to do). Hence, I speculated that mothers were mating with outside males who might take over her troop one day. By casting wide the web of possible paternity, mothers could increase the prospects of future survival of offspring, since males almost never attack infants carried by females that, in the biblical sense of the word, they have “known.” Males use past relations with the mother as a cue to attack or tolerate her infant. (35)

Hrdy would go on to discover this was just one of myriad strategies primate females use to get their genes into future generations. The days of seeing females as passive vehicles while the males duke it out for evolutionary supremacy were now numbered.

I’ll never forget the Young-Goodman-Brown experience of reading the twelfth chapter of Mother Nature, titled “Unnatural Mothers,” which covers an impressive variety of evidence sources and simply devastates any notion of women as nurturing automatons evolved for the sole purpose of serving as loving mothers. The verdict researchers arrive at whenever they take an honest look into the practices of women with newborns is that care is contingent. To give just one example, Hrdy cites the history of one of the earliest foundling homes in the world, the “Hospital of the Innocents” in Florence.

Founded in 1419, with assistance from the silk guilds, the Ospedale degli Innocenti was completed in 1445. Ninety foundlings were left there the first year. By 1539 (a famine year), 961 babies were left. Eventually five thousand infants a year poured in from all corners of Tuscany. (299)

What this means is that a troubling number of new mothers were realizing they couldn't care for their infants. Unfortunately, newborns without direct parental care seldom fare well. “Of 15,000 babies left at the Innocenti between 1755 and 1773,” Hrdy reports, “two thirds died before reaching their first birthday” (299). And there were fifteen other foundling homes in the Grand Duchy of Tuscany at the time.

The chapter amounts to a worldwide tour of infant abandonment, exposure, or killing. (I remember having a nightmare after reading it about being off-balance and unable to set a foot down without stepping on a dead baby.) Researchers studying sudden infant death syndrome in London set up hidden cameras to monitor mothers interacting with their babies but ended up videotaping some of the mothers trying to smother them. Cases like this have made it necessary for psychiatrists to warn doctors studying the phenomenon “that some undeterminable portion of SIDS cases might be infanticides” (292). Why do so many mothers abandon or kill their babies? Turning to the ethnographic data, Hrdy explains,

Unusually detailed information was available for some dozen societies. At a gross level, the answer was obvious. Mothers kill their own infants where other forms of birth control are unavailable. Mothers were unwilling to commit themselves and had no way to delegate care of the unwanted infant to others—kin, strangers, or institutions. History and ecological constraints interact in complex ways to produce different solutions to unwanted births. (296)

Many scholars see the contingent nature of maternal care as evidence that motherhood is nothing but a social construct. Consistent with the blank-slate view of human nature, this theory holds that every aspect of child-rearing, whether pertaining to the roles of mothers or fathers, is determined solely by culture and therefore must be learned. Others, who simply can’t let go of the idea of women as virtuous vessels, suggest that these women, as numerous as they are, must all be deranged.

Hrdy demolishes both the purely social constructivist view and the suggestion of pathology. And her account of the factors that lead women to infanticide goes to the heart of her arguments about the centrality of female intelligence in the history of human evolution. Citing the pioneering work of evolutionary psychologists Martin Daly and Margo Wilson, Hrdy writes,

How a mother, particularly a very young mother, treats one infant turns out to be a poor predictor of how she might treat another one born when she is older, or faced with improved circumstances. Even with culture held constant, observing modern Western women all inculcated with more or less the same post-Enlightenment values, maternal age turned out to be a better predictor of how effective a mother would be than specific personality traits or attitudes. Older women describe motherhood as more meaningful, are more likely to sacrifice themselves on behalf of a needy child, and mourn lost pregnancies more than do younger women. (314)

The takeaway is that a woman, to reproduce successfully, must assess her circumstances, including the level of support she can count on from kin, dads, and society. If she lacks the resources or the support necessary to raise the child, she may have to make a hard decision. But making that decision in the present unfavorable circumstances in no way precludes her from making the most of future opportunities to give birth to other children and raise them to reproductive age.

Hrdy goes on to describe an experimental intervention that took place in a hospital located across the street from a foundling home in 17th-century France. The Hospice des Enfants Assistés cared for indigent women and assisted them during childbirth. It was the only place where poor women could legally abandon their babies. What the French reformers did was tell a subset of the new mothers that they had to stay with their newborns for eight days after birth.

Under this “experimental” regimen, the proportion of destitute mothers who subsequently abandoned their babies dropped from 24 to 10 percent. Neither cultural concepts about babies nor their economic circumstances had changed. What changed was the degree to which they had become attached to their breast-feeding infants. It was as though their decision to abandon their babies and their attachment to their babies operated as two different systems. (315)

Following the originator of attachment theory, John Bowlby, who set out to integrate psychiatry and developmental psychology into an evolutionary framework, Hrdy points out that the emotions underlying the bond between mothers and infants (and fathers and infants too) are as universal as they are consequential. Indeed, the mothers who are forced to abandon their infants have to be savvy enough to realize they have to do so before these emotions are engaged or they will be unable to go through with the deed.

Female strategy plays a crucial role in reproductive outcomes in several domains beyond the choice of whether or not to care for infants. Women must form bonds with other women for support, procure the protection of men (usually from other men), and lay the groundwork for their children’s own future reproductive success. And that’s just what women have to do before choosing a mate—a task that involves striking a balance between good genes and a high level of devotion—getting pregnant, and bringing the baby to term. The demographic transition that occurs when an agrarian society becomes increasingly industrialized is characterized at first by huge population increases as infant mortality drops, and then by a leveling off as women gain more control over their life trajectories. Here again, the choices women tend to make are at odds with Victorian (and modern evangelical) conceptions of their natural proclivities. Hrdy writes,

Since, formerly, status and well-being tended to be correlated with reproductive success, it is not surprising that mothers, especially those in higher social ranks, put the basics first. When confronted with a choice between striving for status and striving for children, mothers gave priority to status and “cultural success” ahead of a desire for many children. (366)

And then of course come all the important tasks and decisions associated with actually raising any children the women eventually do give birth to. One of the basic skill sets women have to master to be successful mothers is making and maintaining friendships; they must be socially savvy because more than with any other ape the support of helpers, what Hrdy calls allomothers, will determine the fate of their offspring.

*****

Mother Nature is a massive work—541 pages before the endnotes—exploring motherhood through the lens of sociobiology and attachment theory. Mothers and Others is leaner, coming in at just under 300 pages, because its focus is narrower. Hrdy feels that in the efforts of the past decade to account for humans’ prosocial impulses, the role of women and motherhood has once again been scanted. She points to the prevalence of theories focusing on competition between groups, with the edge going to those made up of the most cooperative and cohesive members. Such theories once again give the leading role to males and their conflicts, leaving half the species out of the story—unless that other half’s only role is to tend to the children and forage for food while the “band of brothers” is out heroically securing borders.

Hrdy doesn’t weigh in directly on the growing controversy over whether group selection has operated as a significant force in human evolution. The problem she sees with intertribal warfare as an explanation for human generosity and empathy is that the timing isn’t right. What Hrdy is after are the selection pressures that led to the evolution of what she calls “emotionally modern humans,” the “people preadapted to get along with one another even when crowded together on an airplane” (66). And she argues that humans must have been emotionally modern before they could have further evolved to be cognitively modern. “Brains require care more than caring requires brains” (176). Her point is that bonds of mutual interest and concern came before language and the capacity for runaway inventiveness. Humans, Hrdy maintains, would have had to begin forming these bonds long before the effects of warfare were felt.

Apart from periodic increases in unusually rich locales, most Pleistocene humans lived at low population densities. The emergence of human mind reading and gift-giving almost certainly preceded the geographic spread of a species whose numbers did not begin to really expand until the past 70,000 years. With increasing population density (made possible, I would argue, because they were already good at cooperating), growing pressure on resources, and social stratification, there is little doubt that groups with greater internal cohesion would prevail over less cooperative groups. But what was the initial payoff? How could hypersocial apes evolve in the first place? (29)

In other words, what was it that took inborn capacities like mirroring an adult’s facial expressions, present in both human and chimp infants, and through generations of natural selection developed them into the intersubjective tendencies displayed by humans today?

Like so many other anthropologists before her, Hrdy begins her attempt to answer this question by pointing to a trait present in humans but absent in our fellow apes. “Under natural conditions,” she writes, “an orangutan, chimpanzee, or gorilla baby nurses for four to seven years and at the outset is inseparable from his mother, remaining in intimate front-to-front contact 100 percent of the day and night” (68). But humans allow others to participate in the care of their babies almost immediately after giving birth to them. Who besides Sarah Blaffer Hrdy would have noticed this difference, or given it more than a passing thought? (Actually, there are quite a few candidates among anthropologists—Kristen Hawkes for instance.) Ape mothers remain in constant contact with their infants, whereas human mothers often hand over their babies to other women to hold as soon as they emerge from the womb. The difference goes far beyond physical contact. Humans are what Hrdy calls “cooperative breeders,” meaning a child will in effect have several parents aside from the primary one. Help from alloparents opens the way for an increasingly lengthy development, which is important because the more complex the trait—and human social intelligence is about as complex as they come—the longer it takes to develop in maturing individuals. Hrdy writes,

One widely accepted tenet of life history theory is that, across species, those with bigger babies relative to the mother’s body size will also tend to exhibit longer intervals between births because the more babies cost the mother to produce, the longer she will need to recoup before reproducing again. Yet humans—like marmosets—provide a paradoxical exception to this rule. Humans, who of all the apes produce the largest, slowest-maturing, and most costly babies, also breed the fastest. (101)

Those marmosets turn out to be central to Hrdy’s argument because, along with their cousins in the family Callitrichidae, the tamarins, they make up almost the totality of the primate species she classifies as “full-fledged cooperative breeders” (92). This and other similarities between humans and marmosets and tamarins have long been overlooked because anthropologists have understandably been focused on the great apes, as well as other common research subjects like baboons and macaques.

Golden Lion Tamarins, by Sarah Landry

Callitrichidae, it so happens, engage in some uncannily human-like behaviors. Plenty of primate babies wail and shriek when they’re in distress, but infants who are frequently not in direct contact with their mothers would have to find a way to engage with them, as well as other potential caregivers, even when they aren’t in any trouble. “The repetitive, rhythmical vocalizations known as babbling,” Hrdy points out, “provided a particularly elaborate way to accomplish this” (122). But humans aren’t the only primates that babble “if by babble we mean repetitive strings of adultlike vocalizations uttered without vocal referents”; marmosets and tamarins do it too. Some of the other human-like patterns aren’t as cute though. Hrdy writes,

Shared care and provisioning clearly enhances maternal reproductive success, but there is also a dark side to such dependence. Not only are dominant females (especially pregnant ones) highly infanticidal, eliminating babies produced by competing breeders, but tamarin mothers short on help may abandon their own young, bailing out at birth by failing to pick up neonates when they fall to the ground or forcing clinging newborns off their bodies, sometimes even chewing on their hands or feet. (99)

It seems that the more cooperative infant care tends to be for a given species, the more conditional it is—the more likely it will be refused when the necessary support of others can’t be counted on.

Hrdy’s cooperative breeding hypothesis is an outgrowth of George Williams and Kristen Hawkes’s so-called “Grandmother Hypothesis.” For Hawkes, the important difference between humans and apes is that human females go on living for decades after menopause, whereas very few female apes—or any other mammals for that matter—live past their reproductive years. Hawkes hypothesized that the help of grandmothers made it possible for ever longer periods of dependent development for children, which in turn made it possible for the incomparable social intelligence of humans to evolve. Until recently, though, this theory had been unconvincing to anthropologists because a renowned compendium of data compiled by George Peter Murdock in his Ethnographic Atlas revealed that there was a strong trend toward patrilocal residence patterns in all the societies that had been studied. Since grandmothers are thought to be much more likely to help care for their daughters’ children than their sons’—owing to paternity uncertainty—the fact that most humans raise their children far from maternal grandmothers made any evolutionary role for them unlikely.

But then in 2004 anthropologist Helen Alvarez reexamined Murdock’s analysis of residence patterns and concluded that pronouncements about widespread patrilocality were based on a great deal of guesswork. After eliminating societies for which too little evidence existed to determine the nature of their residence practices, Alvarez calculated that the majority of the remaining societies were bilocal, which means couples move back and forth between the mother’s and the father’s groups. Citing “The Alvarez Corrective” and other evidence, Hrdy concludes,

Instead of some highly conserved tendency, the cross-cultural prevalence of patrilocal residence patterns looks less like an evolved human universal than a more recent adaptation to post-Pleistocene conditions, as hunters moved into northern climes where women could no longer gather wild plants year-round or as groups settled into circumscribed areas. (246)

But Hrdy extends the cast of alloparents to include a mother’s preadult daughters, as well as fathers and their extended families, although the male contribution is highly variable across cultures (and variable too of course among individual men).

With the observation that human infants rely on multiple caregivers throughout development, Hrdy suggests the mystery of why selection favored the retention and elaboration of mind reading skills in humans but not in other apes can be solved by considering the life-and-death stakes for human babies trying to understand the intentions of mothers and others. She writes,

Babies passed around in this way would need to exercise a different skill set in order to monitor their mothers’ whereabouts. As part of the normal activity of maintaining contact both with their mothers and with sympathetic alloparents, they would find themselves looking for faces, staring at them, and trying to read what they reveal. (121)

Mothers, of course, would also have to be able to read the intentions of others whom they might consider handing their babies over to. So the selection pressure occurs on both sides of the generational divide. And now that she’s proposed her candidate for the single most pivotal transition in human evolution Hrdy’s next task is to place it in a sequence of other important evolutionary developments.

Without a doubt, highly complex coevolutionary processes were involved in the evolution of extended lifespans, prolonged childhoods, and bigger brains. What I want to stress here, however, is that cooperative breeding was the pre-existing condition that permitted the evolution of these traits in the hominin line. Creatures may not need big brains to evolve cooperative breeding, but hominins needed shared care and provisioning to evolve big brains. Cooperative breeding had to come first. (277)

*****

Flipping through Mother Nature, a book I first read over ten years ago, I can feel some of the excitement I must have experienced as a young student of behavioral science, having graduated from the pseudoscience of Freud and Jung to the more disciplined—and in its way far more compelling—efforts of John Bowlby, on a path, I was sure, to becoming a novelist, and now setting off into this newly emerging field with the help of a great scientist who saw the value of incorporating literature and art into her arguments, not merely as incidental illustrations retrofitted to recently proposed principles, but as sources of data in their own right, and even as inspiration potentially lighting the way to future discovery. To perceive, to comprehend, we must first imagine. And stretching the mind to dimensions never before imagined is what art is all about.

Yet there is an inescapable drawback to massive books like Mother Nature—for writers and readers alike—which is that any effort to grasp and convey such a massive array of findings and theories comes with the risk of casual distortion since the minutiae mastered by the experts in any subdiscipline will almost inevitably be heeded insufficiently in the attempt to conscript what appear to be basic points in the service of a broader perspective. Even more discouraging is the assurance that any intricate tapestry woven of myriad empirical threads will inevitably be unraveled by ongoing research. Your tapestry is really a snapshot taken from a distance of a field in flux, and no sooner does the shutter close than the beast continues along the path of its stubbornly unpredictable evolution.

When Mothers and Others was published just four years ago in 2009, for instance, reasoning based on the theory of kin selection led most anthropologists to assume, as Hrdy states, that “forager communities are composed of flexible assemblages of close and more distant blood relations and kin by marriage” (132).

This assumption seems to have been central to the thinking that led to the principal theory she lays out in the book, as she explains that “in foraging contexts the majority of children alloparents provision are likely to be cousins, nephews, and nieces rather than unrelated children” (158). But as theories evolve old assumptions come under new scrutiny, and in an article published in the journal Science in March of 2011 anthropologist Kim Hill and his colleagues report that after analyzing the residence and relationship patterns of 32 modern foraging societies their conclusion is that “most individuals in residential groups are genetically unrelated” (1286). In science, two years can make a big difference. This same study does, however, bolster a different pillar of Hrdy’s argument by demonstrating that men relocate to their wives’ groups as often as women relocate to their husbands’, lending further support to Alvarez’s corrective of Murdock’s data. 

Even if every last piece of evidence she marshals in her case for how pivotal the transition to cooperative breeding was in the evolution of mutual understanding in humans is overturned, Hrdy’s painstaking efforts to develop her theory and lay it out so comprehensively, so compellingly, and so artfully, will not have been wasted. Darwin once wrote that “all observation must be for or against some view to be of any service,” but many scientists, trained as they are to keep their eyes on the data and to avoid the temptation of building grand edifices on foundations of inference and speculation, look askance at colleagues who dare to comment publicly on fields outside their specialties, especially in cases like Jared Diamond’s where their efforts end up winning them Pulitzers and guaranteed audiences for their future works.

But what use are legions of researchers with specialized knowledge hermetically partitioned by narrowly focused journals and conferences of experts with homogenous interests? Science is contentious by nature, so whenever a book gains notoriety with a nonscientific audience we can count on groaning from the author’s colleagues as they rush to assure us what we’ve read is a misrepresentation of their field. But stand-alone findings, no matter how numerous, no matter how central they are to researchers’ daily concerns, can’t compete with the grand holistic visions of the Diamonds, Hrdys, or Wilsons, imperfect and provisional as they must be, when it comes to inspiring the next generation of scientists. Nor can any number of correlation coefficients or regression analyses spark anything like the same sense of wonder that comes from even a glimmer of understanding about how a new discovery fits within, and possibly transforms, our conception of life and the universe in which it evolved. The trick, I think, is to read and ponder books like the ones Sarah Blaffer Hrdy writes as soon as they’re published—but to be prepared all the while, as soon as you’re finished reading them, to read and ponder the next one, and the one after that.

Also read:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

Dennis Junk

Let's Play Kill Your Brother: Fiction as a Moral Dilemma Game

Anthropologist Jean Briggs discovered one of the keys to Inuit peacekeeping in the style of play adults use to engage children. She describes the games in her famous essay, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” and in so doing, probably unknowingly, lays the groundwork for an understanding of how our love of fiction evolved, along with our moral sensibilities.

            Season 3 of Breaking Bad opens with two expressionless Mexican men in expensive suits stepping out of a Mercedes, taking a look around the peasant village they’ve just arrived in, and then dropping to the ground to crawl on their knees and elbows to a candlelit shrine where they leave an offering to Santa Muerte, along with a crude drawing of the meth cook known as Heisenberg, marking him for execution. We later learn that the two men, Leonel and Marco, who look almost identical, are in fact twins (played by Daniel and Luis Moncada), and that they are the cousins of Tuco Salamanca, a meth dealer and cartel affiliate they believe Heisenberg betrayed and killed. We also learn that they kill people themselves as a matter of course, without registering the slightest emotion and without uttering a word to each other to mark the occasion. An episode later in the season, after we’ve been made amply aware of how coldblooded these men are, begins with a flashback to a time when they were just boys fighting over an action figure as their uncle talks cartel business on the phone nearby. After Marco gets tired of playing keep-away, he tries to provoke Leonel further by pulling off the doll’s head, at which point Leonel runs to his Uncle Hector, crying, “He broke my toy!”

“He’s just having fun,” Hector says, trying to calm him. “You’ll get over it.”

“No! I hate him!” Leonel replies. “I wish he was dead!”

Hector’s expression turns grave. After a moment, he calls Marco over and tells him to reach into the tub of melting ice beside his chair to get him a beer. When the boy leans over the tub, Hector shoves his head into the water and holds it there. “This is what you wanted,” he says to Leonel. “Your brother dead, right?” As the boy frantically pulls on his uncle’s arm trying to free his brother, Hector taunts him: “How much longer do you think he has down there? One minute? Maybe more? Maybe less? You’re going to have to try harder than that if you want to save him.” Leonel starts punching his uncle’s arm but to no avail. Finally, he rears back and punches Hector in the face, prompting him to release Marco and rise from his chair to stand over the two boys, who are now kneeling beside each other. Looking down at them, he says, “Family is all.”

The scene serves several dramatic functions. By showing the ruthless and violent nature of the boys’ upbringing, it intensifies our fear on behalf of Heisenberg, who we know is actually Walter White, a former chemistry teacher and family man from a New Mexico suburb who only turned to crime to make some money for his family before his lung cancer kills him. It also goes some distance toward humanizing the brothers by giving us insight into how they became the mute, mechanical murderers they are when we’re first introduced to them. The bond between the two men and their uncle will be important in upcoming episodes as well. But the most interesting thing about the scene is that it represents in microcosm the single most important moral dilemma of the whole series.

Marco and Leonel are taught to do violence if need be to protect their family. Walter, the show’s central character, gets involved in the meth business for the sake of his own family, and as he continues getting more deeply enmeshed in the world of crime he justifies his decisions at each juncture by saying he’s providing for his wife and kids. But how much violence can really be justified, we’re forced to wonder, with the claim that you’re simply protecting or providing for your family? The entire show we know as Breaking Bad can actually be conceived of as a type of moral exercise like the one Hector puts his nephews through, designed to impart or reinforce a lesson, though the lesson of the show is much more complicated. It may even be the case that our fondness for fictional narratives more generally, like the ones we encounter in novels and movies and TV shows, originated in our need as a species to develop and hone complex social skills involving powerful emotions and difficult cognitive calculations.

Most of us watching Breaking Bad probably feel Hector went way too far with his little lesson, and indeed I’d like to think not too many parents or aunts and uncles would be willing to risk drowning a kid to reinforce the bond between him and his brother. But presenting children with frightening and stressful moral dilemmas to guide them through major lifecycle transitions—weaning, the birth of siblings, adoptions—which tend to arouse severe ambivalence can be an effective way to encourage moral development and instill traditional values. The ethnographer Jean Briggs has found that among the Inuit peoples whose cultures she studies adults frequently engage children in what she calls “playful dramas” (173), which entail hypothetical moral dilemmas that put the children on the hot seat as they struggle to come up with a solution. She writes about these lessons, which strike many outsiders as a cruel form of teasing by the adults, in “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” a chapter she contributed to a 1994 anthology of anthropological essays on peace and conflict. In one example Briggs recounts,

A mother put a strange baby to her breast and said to her own nursling: “Shall I nurse him instead of you?” The mother of the other baby offered her breast to the rejected child and said: “Do you want to nurse from me? Shall I be your mother?” The child shrieked a protest shriek. Both mothers laughed. (176)

This may seem like sadism on the part of the mothers, but it probably functioned to soothe the bitterness arising from the child’s jealousy of a younger nursling. It would also help to settle some of the ambivalence toward the child’s mother, which comes about inevitably as a response to disciplining and other unavoidable frustrations.

Another example Briggs describes seems even more pointlessly sadistic at first glance. A little girl’s aunt takes her hand and puts it on a little boy’s head, saying, “Pull his hair.” The girl doesn’t respond, so her aunt yanks on the boy’s hair herself, making him think the girl had done it. They quickly become embroiled in a “battle royal,” urged on by several adults who find it uproarious. These adults do, however, end up stopping the fight before any serious harm can be done. As horrible as this trick may seem, Briggs believes it serves to instill in the children a strong distaste for fighting because the experience is so unpleasant for them. They also learn “that it is better not to be noticed than to be playfully made the center of attention and laughed at” (177). What became clear to Briggs over time was that the teasing she kept witnessing wasn’t just designed to teach specific lessons but that it was also tailored to the child’s specific stage of development. She writes,

Indeed, since the games were consciously conceived of partly as tests of a child’s ability to cope with his or her situation, the tendency was to focus on a child’s known or expected difficulties. If a child had just acquired a sibling, the game might revolve around the question: “Do you love your new baby sibling? Why don’t you kill him or her?” If it was a new piece of clothing that the child had acquired, the question might be: “Why don’t you die so I can have it?” And if the child had been recently adopted, the question might be: “Who’s your daddy?” (172)

As unpleasant as these tests can be for the children, they never entail any actual danger—Inuit adults would probably agree Hector Salamanca went a bit too far—and they always take place in circumstances and settings where the only threats and anxieties come from the hypothetical, playful dilemmas and conflicts. Briggs explains,

A central idea of Inuit socialization is to “cause thought”: isumaqsayuq. According to [Arlene] Stairs, isumaqsayuq, in North Baffin, characterizes Inuit-style education as opposed to the Western variety. Warm and tender interactions with children help create an atmosphere in which thought can be safely caused, and the questions and dramas are well designed to elicit it. More than that, and as an integral part of thought, the dramas stimulate emotion. (173)

Part of the exercise then seems to be to introduce the children to their own feelings. Prior to having their sibling’s life threatened, the children may not have any idea how they’d feel in the event of that sibling’s death. After the test, however, it becomes much more difficult for them to entertain thoughts of harming their brother or sister—the thought alone will probably be unpleasant.

Briggs also points out that the games send the implicit message to the children that they can be trusted to arrive at the moral solution. Hector knows Leonel won’t let his brother drown—and Leonel learns that his uncle knows this about him. The Inuit adults who tease and tempt children are letting them know they have faith in the children’s ability to resist their selfish or aggressive impulses. Discussing Briggs’s work in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame, anthropologist Christopher Boehm suggests that evolution has endowed children with the social and moral emotions we refer to collectively as consciences, but these inborn moral sentiments need to be activated and shaped through socialization. He writes,

On the one side there will always be our usefully egoistic selfish tendencies, and on the other there will be our altruistic or generous impulses, which also can advance our fitness because altruism and sympathy are valued by our peers. The conscience helps us to resolve such dilemmas in ways that are socially acceptable, and these Inuit parents seem to be deliberately “exercising” the consciences of their children to make morally socialized adults out of them. (226)

The Inuit-style moral dilemma games seem strange, even shocking, to people from industrialized societies, and so it’s clear they’re not a normal part of children’s upbringing in every culture. They don’t even seem to be all that common among hunter-gatherers outside the region of the Arctic. Boehm writes, however,

Deliberately and stressfully subjecting children to nasty hypothetical dilemmas is not universal among foraging nomads, but as we’ll see with Nisa, everyday life also creates real moral dilemmas that can involve Kalahari children similarly. (226)

Boehm goes on to recount an episode from anthropologist Marjorie Shostak’s famous biography Nisa: The Life and Words of a !Kung Woman to show that parents all the way on the opposite side of the world from where Briggs did her fieldwork sometimes light on similar methods for stimulating their children’s moral development.

Nisa seems to have been a greedy and impulsive child. When her pregnant mother tried to wean her, she would have none of it. At one point, she even went so far as to sneak into the hut while her mother was asleep and try to suckle without waking her up. Throughout the pregnancy, Nisa continually expressed ambivalence toward the upcoming birth of her sibling, so much so that her parents anticipated there might be some problems. The !Kung resort to infanticide in certain dire circumstances, and Nisa’s parents probably reasoned she was at least somewhat familiar with the coping mechanism many other parents used when killing a newborn was necessary. What they’d do is treat the baby as an object, not naming it or in any other way recognizing its identity as a family member. Nisa explained to Shostak how her parents used this knowledge to impart a lesson about her baby brother.

After he was born, he lay there, crying. I greeted him, “Ho, ho, my baby brother! Ho, ho, I have a little brother! Some day we’ll play together.” But my mother said, “What do you think this thing is? Why are you talking to it like that? Now, get up and go back to the village and bring me my digging stick.” I said, “What are you going to dig?” She said, “A hole. I’m going to dig a hole so I can bury the baby. Then you, Nisa, will be able to nurse again.” I refused. “My baby brother? My little brother? Mommy, he’s my brother! Pick him up and carry him back to the village. I don’t want to nurse!” Then I said, “I’ll tell Daddy when he comes home!” She said, “You won’t tell him. Now, run back and bring me my digging stick. I’ll bury him so you can nurse again. You’re much too thin.” I didn’t want to go and started to cry. I sat there, my tears falling, crying and crying. But she told me to go, saying she wanted my bones to be strong. So, I left and went back to the village, crying as I walked. (The weaning episode occurs on pgs. 46-57)

Again, this may strike us as cruel, but by threatening her brother’s life, Nisa’s mother succeeded in triggering her natural affection for him, thus tipping the scales of her ambivalence to ensure the protective and loving feelings won out over the bitter and jealous ones. This example was extreme enough that Nisa remembered it well into adulthood, but Boehm sees it as evidence that real life reliably offers up dilemmas parents all over the world can use to instill morals in their children. He writes,

I believe that all hunter-gatherer societies offer such learning experiences, not only in the real-life situations children are involved with, but also in those they merely observe. What the Inuit whom Briggs studied in Cumberland Sound have done is to not leave this up to chance. And the practice would appear to be widespread in the Arctic. Children are systematically exposed to life’s typical stressful moral dilemmas, and often hypothetically, as a training ground that helps to turn them into adults who have internalized the values of their groups. (234)

One of the reasons such dilemmas, whether real or hypothetical or merely observed, are effective as teaching tools is that they bypass the threat to personal autonomy that tends to accompany direct instruction. Imagine Tío Salamanca simply scolding Leonel for wishing his brother dead—it would have only aggravated his resentment and sparked defiance. Leonel would probably also harbor some bitterness toward his uncle for unjustly defending Marco. In any case, he would have been stubbornly resistant to the lesson.

Winston Churchill nailed the sentiment when he said, “Personally, I am always ready to learn, although I don’t always like being taught.” The Inuit-style moral dilemmas force the children to come up with the right answer on their own, a task that requires the integration and balancing of short and long term desires, individual and group interests, and powerful albeit contradictory emotions. The skills that go into solving such dilemmas are indistinguishable from the qualities we recognize as maturity, self-knowledge, generosity, poise, and wisdom.

For the children Briggs witnessed being subjected to these moral tests, the understanding that the dilemmas were in fact only hypothetical developed gradually as they matured. For the youngest ones, the stakes were real and the solutions were never clear at the outset. Briggs explains that

while the interaction between small children and adults was consistently good-humored, benign, and playful on the part of the adults, it taxed the children to—or beyond—the limits of their ability to understand, pushing them to expand their horizons, and testing them to see how much they had grown since the last encounter. (173)

What this suggests is that there isn’t always a simple declarative lesson—a moral to the story, as it were—imparted in these games. Instead, the solutions to the dilemmas can often be open-ended, and the skills the children practice can thus be more general and abstract than some basic law or principle. Briggs goes on,

Adult players did not make it easy for children to thread their way through the labyrinth of tricky proposals, questions, and actions, and they did not give answers to the children or directly confirm the conclusions the children came to. On the contrary, questioning a child’s first facile answers, they turned situations round and round, presenting first one aspect then another, to view. They made children realize their emotional investment in all possible outcomes, and then allowed them to find their own way out of the dilemmas that had been created—or perhaps, to find ways of living with unresolved dilemmas. Since children were unaware that the adults were “only playing,” they could believe that their own decisions would determine their fate. And since the emotions aroused in them might be highly conflicted and contradictory—love as well as jealousy, attraction as well as fear—they did not always know what they wanted to decide. (174-5)

As the children mature, they become more adept at distinguishing between real and hypothetical problems. Indeed, Briggs suggests one of the ways adults recognize children’s budding maturity is that they begin to treat the dilemmas as a game, ceasing to take them seriously, and ceasing to take themselves as seriously as they did when they were younger.

In his book On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd theorizes that the fictional narratives that humans engage one another with in every culture all over the world, be they in the form of religious myths, folklore, or plays and novels, can be thought of as a type of cognitive play—similar to the hypothetical moral dilemmas of the Inuit. He sees storytelling as an adaptation that encourages us to train the mental faculties we need to function in complex societies. The idea is that evolution ensures that adaptive behaviors tend to be pleasurable, and thus many animals playfully and joyously engage in activities in low-stakes, relatively safe circumstances that will prepare them to engage in similar activities that have much higher stakes and are much more dangerous. Boyd explains,

The more pleasure that creatures have in play in safe contexts, the more they will happily expend energy in mastering skills needed in urgent or volatile situations, in attack, defense, and social competition and cooperation. This explains why in the human case we particularly enjoy play that develops skills needed in flight (chase, tag, running) and fight (rough-and-tumble, throwing as a form of attack at a distance), in recovery of balance (skiing, surfing, skateboarding), and individual and team games. (92)

The skills most necessary to survive and thrive in human societies are the same ones Inuit adults help children develop with the hypothetical dilemmas Briggs describes. We should expect fiction, then, to feature similar types of moral dilemmas. Some stories may be designed to convey simple messages—“Don’t hurt your brother,” “Don’t stray from the path”—but others might be much more complicated; they may not even have any viable solutions at all. “Art prepares minds for open-ended learning and creativity,” Boyd writes; “fiction specifically improves our social cognition and our thinking beyond the here and now” (209).

One of the ways the cognitive play we call novels or TV shows differs from Inuit dilemma games is that the fictional characters take over center stage from the individual audience members. Instead of being forced to decide on a course of action ourselves, we watch characters we’ve become emotionally invested in try to come up with solutions to the dilemmas. When these characters are first introduced to us, our feelings toward them will be based on the same criteria we’d apply to real people who could potentially become a part of our social circles. Boyd explains,

Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch. (130)

We favor characters who are good team players—who communicate honestly, who show concern for others, and who direct aggression toward enemies and cheats—for obvious reasons, but we also assess them in terms of what they might contribute to the group. Characters with exceptional strength, beauty, intelligence, or artistic ability are always especially attention-worthy. Of course, characters with qualities that make them sometimes an asset and sometimes a liability represent a moral dilemma all on their own—it’s no wonder such characters tend to be so compelling.

The most common fictional dilemma pits a character we like against one or more characters we hate—the good team player versus the power- or money-hungry egoist. We can think of the most straightforward plot as an encroachment of chaos on the providential moral order we might otherwise take for granted. When the bad guy is finally defeated, it’s like a toy that was snatched away from us has just been returned. We embrace the moral order all the more vigorously. But of course our stories aren’t limited to this one basic formula. Around the turn of the last century, the French writer Georges Polti, following up on the work of Italian playwright Carlo Gozzi, tried to write a comprehensive list of all the basic plots in plays and novels, and flipping through his book The Thirty-Six Dramatic Situations, you find that with few exceptions (“Daring Enterprise,” “The Enigma,” “Recovery of a Lost One”) the situations aren’t simply encounters between characters with conflicting goals, or characters who run into obstacles in chasing after their desires. The conflicts are nearly all moral, either between a virtuous character and a less virtuous one or between selfish or greedy impulses and more altruistic ones. Polti’s book could be called The Thirty-Odd Moral Dilemmas in Fiction. Hector Salamanca would be happy (not really) to see the thirteenth situation: “Enmity of Kinsmen,” the first example of which is “Hatred of Brothers” (49).

One type of fictional dilemma that seems to be particularly salient in American society today pits our impulse to punish wrongdoers against our admiration for people with exceptional abilities. Characters like Walter White in Breaking Bad win us over with qualities like altruism, resourcefulness, and ingenuity—but then they go on to behave in strikingly, though somehow not obviously, immoral ways. Variations on Conan Doyle’s Sherlock Holmes abound; he’s the supergenius who’s also a dick (get the double-entendre?): the BBC’s Sherlock (by far the best), the movies starring Robert Downey Jr., the upcoming series featuring an Asian female Watson (Lucy Liu)—plus all the minor variations like The Mentalist and House.

Though the idea that fiction is a type of low-stakes training simulation to prepare people cognitively and emotionally to take on difficult social problems in real life may not seem all that earthshattering, conceiving of stories as analogous to Inuit moral dilemmas designed to exercise children’s moral reasoning faculties can nonetheless help us understand why worries about the examples set by fictional characters are so often misguided. Many parents and teachers noisily complain about sex or violence or drug use in media. Academic literary critics condemn the way this or that author portrays women or minorities. Underlying these concerns is the crude assumption that stories simply encourage audiences to imitate the characters, that those audiences are passive receptacles for the messages—implicit or explicit—conveyed through the narrative. To be fair, these worries may be well placed when it comes to children so young they lack the cognitive sophistication necessary for separating their thoughts and feelings about protagonists from those they have about themselves, and are thus prone to take the hero for a simple model of emulation-worthy behavior. But, while Inuit adults communicate to children that they can be trusted to arrive at a right or moral solution, the moralizers in our culture betray their utter lack of faith in the intelligence and conscience of the people they try to protect from the corrupting influence of stories with imperfect or unsavory characters. 

           This type of self-righteous and overbearing attitude toward readers and viewers strikes me as more likely by orders of magnitude to provoke defiant resistance to moral lessons than the North Baffin’s isumaqsayuq approach. In other words, a good story is worth a thousand sermons. But if the moral dilemma at the core of the plot has an easy solution—if you can say precisely what the moral of the story is—it’s probably not a very good story.

Also read

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE

And

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

Dennis Junk

The Imp of the Underground and the Literature of Low Status

A famous scene in “Notes from the Underground” echoes a famous study comparing people’s responses to an offense. What are the implications for behavior and personality of having low social status, and how does that play out in fiction? Is Poe’s “Imp of the Perverse” really just an example of our inborn defiance, our raging against the machine?

The one overarching theme in literature, and I mean all literature since there’s been any to speak of, is injustice. Does the girl get the guy she deserves? If so, the work is probably commercial, as opposed to literary, fiction. If not, then the reason begs pondering. Maybe she isn’t pretty enough, despite her wit and aesthetic sophistication, so we’re left lamenting the shallowness of our society’s males. Maybe she’s of a lower caste, despite her unassailable virtue, in which case we’re forced to question our complacency before morally arbitrary class distinctions. Or maybe the timing was just off—cursed fate in all her fickleness. Another literary work might be about the woman who ends up without the fulfilling career she longed for and worked hard to get, in which case we may blame society’s narrow conception of femininity, as evidenced by all those damn does-the-girl-get-the-guy stories.

            The prevailing theory of what arouses our interest in narratives focuses on the characters’ goals, which magically, by some as yet undiscovered cognitive mechanism, become our own. But plots often catch us up before any clear goals are presented to us, and our partisanship on behalf of a character easily endures shifting purposes. We as readers and viewers are not swept into stories through the transubstantiation of someone else’s striving into our own, with the protagonist serving as our avatar as we traverse the virtual setting and experience the pre-orchestrated plot. Rather, we reflexively monitor the character for signs of virtue and for a capacity to contribute something of value to his or her community, the same way we, in our nonvirtual existence, would monitor and assess a new coworker, classmate, or potential date. While suspense in commercial fiction hinges on high-stakes struggles between characters easily recognizable as good and those easily recognizable as bad, and comfortably condemnable as such, forward momentum in literary fiction—such as it is—depends on scenes in which the protagonist is faced with temptations, tests of virtue, moral dilemmas.

The strain and complexity of coming to some sort of resolution to these dilemmas often serves as a theme in itself, a comment on the mad world we live in, where it’s all but impossible to distinguish between right and wrong. Indeed, the most common emotional struggle depicted in literature is that between the informal, even intimate handling of moral evaluation—which comes naturally to us owing to our evolutionary heritage as a group-living species—and the official, systematized, legal or institutional channels for determining merit and culpability that became unavoidable as societies scaled up exponentially after the advent of agriculture. These burgeoning impersonal bureaucracies are all too often ill-equipped to properly weigh messy mitigating factors, and they’re all too vulnerable to subversion by unscrupulous individuals who know how to game them. Psychopaths who ought to be in prison instead become CEOs of multinational investment firms, while sensitive and compassionate artists and humanitarians wind up taking lowly day jobs at schools or used book stores. But the feature of institutions and bureaucracies—and of complex societies more generally—that takes the biggest toll on our Pleistocene psyches, the one that strikes us as the most glaring injustice, is their stratification, their arrangement into steeply graded hierarchies.

Unlike our hierarchical ape cousins, all present-day societies of nomadic foragers still living in small groups—like those our ancestors lived in throughout the epoch that gave rise to the suite of traits we recognize as uniquely human—collectively enforce an ethos of egalitarianism. As anthropologist Christopher Boehm explains in his book Hierarchy in the Forest: The Evolution of Egalitarianism,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

Since humans evolved from a species that was ancestral to both chimpanzees and gorillas, we carry in us many of the emotional and behavioral capacities that support hierarchies. But, during all those millennia of egalitarianism, we also developed an instinctive distaste for behaviors that undermine an individual’s personal sovereignty. “On their list of serious moral transgressions,” Boehm explains,

hunter-gatherers regularly proscribe the enactment of behavior that is politically overbearing. They are aiming at upstarts who threaten the autonomy of other group members, and upstartism takes various forms. An upstart may act the bully simply because he is disposed to dominate others, or he may become selfishly greedy when it is time to share meat, or he may want to make off with another man’s wife by threat or by force. He (or sometimes she) may also be a respected leader who suddenly begins to issue direct orders… An upstart may simply take on airs of superiority, or may aggressively put others down and thereby violate the group’s idea of how its main political actors should be treating one another. (43)

In a band of thirty people, it’s possible to keep a vigilant eye on everyone and head off potential problems. But, as populations grow, encounters with strangers in settings where no one knows one another open the way for threats to individual autonomy and casual insults to personal dignity. And, as professional specialization and institutional complexity increase in pace with technological advancement, power structures become necessary for efficient decision-making. Economic inequality then takes hold as a corollary of professional inequality.

None of this is to suggest that the advance of civilization inevitably leads to increasing injustice. In fact, per capita murder rates are much higher in hunter-gatherer societies. Nevertheless, the impersonal nature of our dealings with others in the modern world often strikes us as overly conducive to perverse incentives and unfair outcomes. And even the most mundane signals of superior status or the most subtle expressions of power, though officially sanctioned, can be maddening. Compare this famous moment in literary history to Boehm’s account of hunter-gatherer political philosophy:

I was standing beside the billiard table, blocking the way unwittingly, and he wanted to pass; he took me by the shoulders and silently—with no warning or explanation—moved me from where I stood to another place, and then passed by as if without noticing. I could have forgiven a beating, but I simply could not forgive his moving me and in the end just not noticing me. (49)

The billiard player's failure to acknowledge his autonomy outrages the narrator, who then considers attacking the man who has treated him with such disrespect. But he can’t bring himself to do it. He explains,

I turned coward not from cowardice, but from the most boundless vanity. I was afraid, not of six-foot-tallness, nor of being badly beaten and chucked out the window; I really would have had physical courage enough; what I lacked was sufficient moral courage. I was afraid that none of those present—from the insolent marker to the last putrid and blackhead-covered clerk with a collar of lard who was hanging about there—would understand, and that they would all deride me if I started protesting and talking to them in literary language. Because among us to this day it is impossible to speak of a point of honor—that is, not honor, but a point of honor (point d’honneur) otherwise than in literary language. (50)

The languages of law and practicality are the only ones whose legitimacy is recognized in modern societies. The language of morality used to describe sentiments like honor has been consigned to literature. This man wants to exact his revenge for the slight he suffered, but that would require his revenge to be understood by witnesses as such. The derision he can count on from all the bystanders would just compound the slight. In place of a close-knit moral community, there is only a loose assortment of strangers. And so he has no recourse.

            The character in this scene could be anyone. Males may be more keyed into the physical dimension of domination and more prone to react with physical violence, but females likewise suffer from slights and belittlements, and react aggressively, often by attacking their tormenter's reputation through gossip. Treating a person of either gender as an insensate obstacle is easier when that person is a stranger you’re unlikely ever to encounter again. But another dynamic is at play in the scene which makes it still easier—almost inevitable. After being unceremoniously moved aside, the narrator becomes obsessed with the man who treated him so dismissively. Desperate to even the score, he ends up stalking the man, stewing resentfully, trying to come up with a plan. He writes,

And suddenly… suddenly I got my revenge in the simplest, the most brilliant way! The brightest idea suddenly dawned on me. Sometimes on holidays I would go to Nevsky Prospect between three and four, and stroll along the sunny side. That is, I by no means went strolling there, but experienced countless torments, humiliations and risings of bile: that must have been just what I needed. I darted like an eel among the passers-by, in a most uncomely fashion, ceaselessly giving way now to generals, now to cavalry officers and hussars, now to ladies; in those moments I felt convulsive pains in my heart and a hotness in my spine at the mere thought of the measliness of my attire and the measliness and triteness of my darting little figure. This was a torment of torments, a ceaseless, unbearable humiliation from the thought, which would turn into a ceaseless and immediate sensation, of my being a fly before that whole world, a foul, obscene fly—more intelligent, more developed, more noble than everyone else—that went without saying—but a fly, ceaselessly giving way to everyone, humiliated by everyone, insulted by everyone. (52)

So the indignity, it seems, was not born of being moved aside like a piece of furniture so much as of being afforded absolutely no status. That’s why being beaten would have been preferable; a beating implies a modicum of worthiness in that it demands recognition, effort, even risk, no matter how slight.

            The idea that occurs to the narrator for the perfect revenge requires that he first remedy the outward signals of his lower social status, “the measliness of my attire and the measliness… of my darting little figure,” as he calls them. The catch is that to don the proper attire for leveling a challenge, he has to borrow money from a man he works with—which only adds to his daily feelings of humiliation. Psychologists Derek Rucker and Adam Galinsky have conducted experiments demonstrating that people display a disturbing readiness to compensate for feelings of powerlessness and low status by making pricy purchases, even though in the long run such expenditures only serve to perpetuate their lowly economic and social straits. The irony is heightened in the story when the actual revenge itself, the trappings for which were so dearly purchased, turns out to be so bathetic.

Suddenly, within three steps of my enemy, I unexpectedly decided, closed my eyes, and—we bumped solidly shoulder against shoulder! I did not yield an inch and passed by on perfectly equal footing! He did not even look back and pretended not to notice: but he only pretended, I’m sure of that. To this day I’m sure of it! Of course, I got the worst of it; he was stronger, but that was not the point. The point was that I had achieved my purpose, preserved my dignity, yielded not a step, and placed myself publicly on an equal social footing with him. I returned home perfectly avenged for everything. (55)

But this perfect vengeance has cost him not only the price of a new coat and hat; it has cost him a full two years of obsession, anguish, and insomnia as well. The implication is that being of lowly status is a constant psychological burden, one that makes people so crazy they become incapable of making rational decisions.

Literature buffs will have recognized these scenes from Dostoevsky’s Notes from Underground (as translated by Richard Pevear and Larissa Volokhonsky), which satirizes the idea of a society based on the principle of “rational egoism” as symbolized by N.G. Chernyshevsky’s image of a “crystal palace” (25), a well-ordered utopia in which every citizen pursues his or her own rational self-interests. Dostoevsky’s underground man hates the idea because, regardless of how effectively such a society may satisfy people’s individual needs, the rigid conformity it would demand would be intolerable. The supposed utopia, then, could never satisfy people’s true interests. He argues,

That’s just the thing, gentlemen, that there may well exist something that is dearer for almost every man than his very best profit, or (so as not to violate logic) that there is this one most profitable profit (precisely the omitted one, the one we were just talking about), which is chiefer and more profitable than all other profits, and for which a man is ready, if need be, to go against all laws, that is, against reason, honor, peace, prosperity—in short, against all these beautiful and useful things—only so as to attain this primary, most profitable profit which is dearer to him than anything else. (22)

The underground man cites examples of people behaving against their own best interests in this section, which serves as a preface to the story of his revenge against the billiard player who so blithely moves him aside. The way he explains this “very best profit” which makes people like himself behave in counterproductive, even self-destructive ways is to suggest that nothing else matters unless everyone’s freedom to choose how to behave is held inviolate. He writes,

One’s own free and voluntary wanting, one’s own caprice, however wild, one’s own fancy, though chafed sometimes to the point of madness—all this is that same most profitable profit, the omitted one, which does not fit into any classification, and because of which all systems and theories are constantly blown to the devil… Man needs only independent wanting, whatever this independence may cost and wherever it may lead. (25-6)

Notes from Underground was originally published in 1864. But the underground man echoes, wittingly or not, the narrator of Edgar Allan Poe’s story from almost twenty years earlier, "The Imp of the Perverse," who posits an innate drive to perversity, explaining,

Through its promptings we act without comprehensible object. Or if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say that through its promptings we act for the reason that we should not. In theory, no reason can be more unreasonable, but in reality there is none so strong. With certain minds, under certain circumstances, it becomes absolutely irresistible. I am not more sure that I breathe, than that the conviction of the wrong or impolicy of an action is often the one unconquerable force which impels us, and alone impels us, to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution to ulterior elements. (403)

This narrator’s suggestion of the irreducibility of the impulse notwithstanding, it’s noteworthy how often the circumstances that induce its expression include the presence of an individual of higher status.

            The famous shoulder bump in Notes from Underground has an uncanny parallel in experimental psychology. In 1996, Dov Cohen, Richard Nisbett, and their colleagues published the research article, “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” in which they report the results of a comparison of the cognitive and physiological responses of southern males to being bumped in a hallway and casually called an asshole with those of northern males. The study showed that whereas men from northern regions were usually amused by the run-in, southern males were much more likely to see it as an insult and a threat to their manhood, and they were much more likely to respond violently. The cortisol and testosterone levels of southern males spiked—the clever experimental setup allowed measures before and after—and these men reported believing physical confrontation was the appropriate way to redress the insult. The way Cohen and Nisbett explain the difference is that the “culture of honor” that emerges in southern regions originally developed as a safeguard for men who lived as herders. Cultures that arise in farming regions place less emphasis on manly honor because farmland is difficult to steal. But if word gets out that a herder is soft, then his livelihood is at risk. Cohen and Nisbett write,

Such concerns might appear outdated for southern participants now that the South is no longer a lawless frontier based on a herding economy. However, we believe these experiments may also hint at how the culture of honor has sustained itself in the South. It is possible that the culture-of-honor stance has become “functionally autonomous” from the material circumstances that created it. Culture of honor norms are now socially enforced and perpetuated because they have become embedded in social roles, expectations, and shared definitions of manhood. (958)

            More recently, in a 2009 article titled “Low-Status Compensation: A Theory for Understanding the Role of Status in Cultures of Honor,” psychologist P.J. Henry takes another look at Cohen and Nisbett’s findings and offers another interpretation based on his own further experimentation. Henry’s key insight is that herding peoples are often considered to be of lower status than people with other professions and lifestyles. After establishing that the southern communities with a culture of honor are often stigmatized with negative stereotypes—drawling accents signaling low intelligence, high incidence of incest and drug use, etc.—both in the minds of outsiders and those of the people themselves, Henry suggests that a readiness to resort to violence probably isn’t now and may not ever have been adaptive in terms of material benefits.

An important perspective of low-status compensation theory is that low status is a stigma that brings with it lower psychological worth and value. While it is true that stigma also often accompanies lower economic worth and, as in the studies presented here, is sometimes defined by it (i.e., those who have lower incomes in a society have more of a social stigma compared with those who have higher incomes), low-status compensation theory assumes that it is psychological worth that is being protected, not economic or financial worth. In other words, the compensation strategies used by members of low-status groups are used in the service of psychological self-protection, not as a means of gaining higher status, higher income, more resources, etc. (453)

And this conception of honor brings us closer to the observations of the underground man and Poe’s boastful murderer. If psychological worth is what’s being defended, then economic considerations fall by the wayside. Unfortunately, since our financial standing tends to be so closely tied to our social standing, our efforts to protect our sense of psychological worth have a nasty tendency to backfire in the long run.

            Henry found evidence for the importance of psychological reactance, as opposed to cultural norms, in causing violence when he divided the participants in his study into high- and low-status categories and then had them respond to questions about how likely they would be to respond to insults with physical aggression. But before being asked about the propriety of violent reprisals, half of the members of each group were asked to recall as vividly as they could a time in their lives when they felt valued by their community. Henry describes the findings thus:

When lower status participants were given the opportunity to validate their worth, they were less likely to endorse lashing out aggressively when insulted or disrespected. Higher status participants were unaffected by the manipulation. (463)

The implication is that people who feel less valuable than others, a condition that tends to be associated with low socioeconomic status, are quicker to retaliate because they are almost constantly on edge, preoccupied at almost every moment with assessments of their standing in relation to others. Aside from a readiness to engage in violence, this type of obsessive vigilance for possible slights, and the feeling of powerlessness that attends it, can be counted on to keep people in a constant state of stress. The massive longitudinal study of British Civil Service employees called the Whitehall Study, which tracks the health outcomes of people at the various levels of the bureaucratic hierarchy, has found that the stress associated with low status also has profound effects on our physical well-being.

            Though it may seem that violence-prone poor people occupying lowly positions on societal and professional totem poles are responsible for aggravating and prolonging their own misery because they tend to spend extravagantly and lash out at their perceived overlords with nary a concern for the consequences, the regularity with which low status leads to self-defeating behavior suggests the impulses are much more deeply rooted than some lazily executed weighing of pros and cons. If the type of wealth or status inequality the underground man finds himself on the short end of had begun to take root in societies like the ones Christopher Boehm describes, a high-risk attempt at leveling the playing field would not only have been understandable—it would have been morally imperative. In a group of nomadic foragers, though, a man endeavoring to knock a would-be alpha down a few pegs would be able to count on the endorsement of most of the other group members. And the success rate for re-establishing and maintaining egalitarianism would have been heartening. Today, we are forced to live with inequality, even though beyond a certain point most people (regardless of political affiliation) see it as an injustice.

            Some of the functions of literature, then, are to help us imagine just how intolerable life on the bottom can be, sympathize with those who get trapped in downward spirals of self-defeat, and begin to imagine what a more just and equitable society might look like. The catch is that we will be put off by characters who mistreat others or simply show a dearth of redeeming qualities.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

and

CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE

Dennis Junk

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins – Part 3 of A Crash Course in Multilevel Selection Theory

In “Moral Origins,” anthropologist Christopher Boehm lays out the mind-blowing theory that humans evolved to be cooperative in large part by developing mechanisms to keep powerful men’s selfish impulses in check. These mechanisms included, in rare instances, capital punishment. Once the free-rider problem was addressed, groups could function more as a unit than as a collection of individuals.

In a 1969 account of her time in Labrador studying the culture of the Montagnais-Naskapi people, anthropologist Eleanor Leacock describes how a man named Thomas, who was serving as her guide and informant, responded to two men they encountered while far from home on a hunting trip. The men, whom Thomas recognized but didn’t know very well, were on the brink of starvation. Even though it meant ending the hunting trip early and hence bringing back fewer furs to trade, Thomas gave the hungry men all the flour and lard he was carrying. Leacock figured that Thomas must have felt at least somewhat resentful for having to cut short his trip and that he was perhaps anticipating some return favor from the men in the future. But Thomas didn’t seem the least bit reluctant to help or frustrated by the setback. Leacock kept pressing him for an explanation until he got annoyed with her probing. She writes,

This was one of the very rare times Thomas lost patience with me, and he said with deep, if suppressed anger, “suppose now, not to give them flour, lard—just dead inside.” More revealing than the incident itself were the finality of his tone and the inference of my utter inhumanity in raising questions about his action. (Quoted in Boehm 219)

The phrase “just dead inside” expresses how deeply internalized the ethic of sympathetic giving is for people like Thomas who live in cultures more similar to those our earliest human ancestors created at the time, around 45,000 years ago, when they began leaving evidence of engaging in all the unique behaviors that are the hallmarks of our species. The Montagnais-Naskapi don’t qualify as an example of what anthropologist Christopher Boehm labels Late Pleistocene Appropriate, or LPA, cultures because they had been involved in fur trading with people from industrialized communities going back long before their culture was first studied by ethnographers. But Boehm includes Leacock’s description in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame because he believes Thomas’s behavior is in fact typical of nomadic foragers and because, infelicitously for his research, standard ethnographies seldom cover encounters like the one Thomas had with those hungry acquaintances of his.

In our modern industrialized civilization, people donate blood, volunteer to fight in wars, sign over percentages of their income to churches, and pay to keep organizations like Doctors without Borders and Human Rights Watch in operation even though the people they help live in far-off countries most of us will never visit. One approach to explaining how this type of extra-familial generosity could have evolved is to suggest people who live in advanced societies like ours are, in an important sense, not in their natural habitat. Among evolutionary psychologists, it has long been assumed that in humans’ ancestral environments, most of the people individuals encountered would either be close kin who carried many genes in common, or at the very least members of a moderately stable group they could count on running into again, at which time they would be disposed to repay any favors. Once you take kin selection and reciprocal altruism into account, the consensus held, there was not much left to explain. Whatever small acts of kindness that weren’t directed toward kin or done with an expectation of repayment were, in such small groups, probably performed for the sake of impressing all the witnesses and thus improving the social status of the performer. As the biologist Michael Ghiselin once famously put it, “Scratch an altruist and watch a hypocrite bleed.” But this conception of what evolutionary psychologists call the Environment of Evolutionary Adaptedness, or EEA, never sat right with Boehm.

One problem with the standard selfish gene scenario that has just recently come to light is that modern hunter-gatherers, no matter where in the world they live, tend to form bands made up of high percentages of non-related or distantly related individuals. In an article published in Science in March of 2011, anthropologist Kim Hill and his colleagues report the findings of their analysis of thirty-two hunter-gatherer societies. The main conclusion of the study is that the members of most bands are not closely enough related for kin selection to sufficiently account for the high levels of cooperation ethnographers routinely observe. Assuming present-day forager societies are representative of the types of groups our Late Pleistocene ancestors lived in, we can rule out kin selection as a likely explanation for altruism of the sort displayed by Thomas or by modern philanthropists in complex civilizations. Boehm offers us a different scenario, one that relies on hypotheses derived from ethological studies of apes and archeological records of our human prehistory as much as on any abstract mathematical accounting of the supposed genetic payoffs of behaviors.

In three cave paintings discovered in Spain that probably date to the dawn of the Holocene epoch around 12,000 years ago, groups of men are depicted with what appear to be bows lifted above their heads in celebration while another man lay dead nearby with one arrow from each of them sticking out of his body. We can only speculate about what these images might have meant to the people who created them, but Boehm points out that all extant nomadic foraging peoples, no matter what part of the world they live in, are periodically forced to reenact dramas that resonate uncannily well with these scenes portrayed in ancient cave art. “Given enough time,” he writes, “any band society is likely to experience a problem with a homicide-prone unbalanced individual. And predictably band members will have to solve the problem by means of execution” (253). One of the more gruesome accounts of such an incident he cites comes from Richard Lee’s ethnography of !Kung Bushmen. After a man named /Twi had killed two men, Lee writes, “A number of people decided that he must be killed.” According to Lee’s informant, a man named =Toma (the symbols before the names represent clicks), the first attempt to kill /Twi was botched, allowing him to return to his hut, where a few people tried to help him. But he ended up becoming so enraged that he grabbed a spear and stabbed a woman in the face with it. When the woman’s husband came to her aid, /Twi shot him with a poisoned arrow, killing him and bringing his total body count to four. =Toma continues the story,

Now everyone took cover, and others shot at /Twi, and no one came to his aid because all those people had decided he had to die. But he still chased after some, firing arrows, but he didn’t hit any more…Then they all fired on him with poisoned arrows till he looked like a porcupine. Then he lay flat. All approached him, men and women, and stabbed his body with spears even after he was dead. (261-2)

The two most important elements of this episode for Boehm are the fact that the death sentence was arrived at through a partial group consensus which ended up being unanimous, and that it was carried out with weapons that had originally been developed for hunting. But this particular case of collectively enacted capital punishment was odd not just in how clumsy it was. Boehm writes,

In this one uniquely detailed description of what seems to begin as a delegated execution and eventually becomes a fully communal killing, things are so chaotic that it’s easy to understand why with hunter-gatherers the usual mode of execution is to efficiently delegate a kinsman to quickly kill the deviant by ambush. (261)

The prevailing wisdom among evolutionary psychologists has long been that any appearance of group-level adaptation, like the collective killing of a dangerous group member, must be an illusory outcome caused by selection at the level of individuals or families. As Steven Pinker explains, “If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate.” To demonstrate that some trait or behavior humans reliably engage in really is for the sake of the group as opposed to the individual engaging in it, there would have to be some conflict between the two motives—serving the group would have to entail incurring some kind of cost for the individual. Pinker explains,

It’s only when humans display traits that are disadvantageous to themselves while benefiting their group that group selection might have something to add. And this brings us to the familiar problem which led most evolutionary biologists to reject the idea of group selection in the 1960s. Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.

The ever-present potential for cooperative or altruistic group norms to be subverted by selfish individuals keen on exploitation is known in game theory as the free rider problem. To see how strong selfish individuals can lord over groups of their conspecifics we can look to the hierarchically organized bands great apes naturally form.

In groups of chimpanzees, for instance, an alpha male gets to eat his fill of the most nutritious food, even going so far at times as seizing meat from the subordinates who hunted it down. The alpha chimp also works to secure, as best he can, sole access to reproductively receptive females. For a hierarchical species like this, status is a winner-take-all competition, and so genes for dominance and cutthroat aggression proliferate. Subordinates tolerate being bullied because they know the more powerful alpha will probably kill them if they try to stand up for themselves. If instead of mounting some ill-fated resistance, however, they simply bide their time, they may eventually grow strong enough to more effectively challenge for the top position. Meanwhile, they can also try to sneak off with females to couple behind the alpha’s back. Boehm suggests that two competing motives keep hierarchies like this in place: one is a strong desire for dominance and the other is a penchant for fear-based submission. What this means is that subordinates only ever submit ambivalently. They even have a recognizable vocalization, which Boehm transcribes as the “waa,” that they use to signal their discontent. In his 1999 book Hierarchy in the Forest: The Evolution of Egalitarian Behavior, Boehm explains,

When an alpha male begins to display and a subordinate goes screaming up a tree, we may interpret this as a submissive act of fear; but when that same subordinate begins to waa as the display continues, it is an open, hostile expression of insubordination. (167)

Since the distant ancestor humans shared in common with chimpanzees likely felt this same ambivalence toward alphas, Boehm theorizes that it served as a preadaptation for the type of treatment modern human bullies can count on in every society of nomadic foragers anthropologists have studied. “I believe,” he writes, “that a similar emotional and behavioral orientation underlies the human moral community’s labeling of domination behaviors as deviant” (167).

Boehm has found accounts of subordinate chimpanzees, bonobos, and even gorillas banding together with one or more partners to take on an excessively domineering alpha—though there was only one case in which this happened with gorillas and the animals in question lived in captivity. But humans are much better at this type of coalition building. Two of the most crucial developments in our own lineage that led to the differences in social organization between ourselves and the other apes were likely to have been an increased capacity for coordinated hunting and the invention of weapons designed to kill big game. As Boehm explains,

Weapons made possible not only killing at a distance, but far more effective threat behavior; brandishing a projectile could turn into an instant lethal attack with relatively little immediate risk to the attacker. (175)

Deadly weapons fundamentally altered the dynamic between lone would-be bullies and those they might try to dominate. As Boehm points out, “after weapons arrived, the camp bully became far more vulnerable” (177). With the advent of greater coalition-building skills and the invention of tools for efficient killing, the opportunities for an individual to achieve alpha status quickly vanished.

            It’s dangerous to assume that any one group of modern people provides the key to understanding our Pleistocene ancestors, but when every group living with types of technology and subsistence methods similar to those of our ancestors follows a similar pattern, it’s much more suggestive. “A distinctively egalitarian political style,” Boehm writes, “is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-6). This egalitarianism must be vigilantly guarded because “A potential bully always seems to be waiting in the wings” (68). Boehm explains what he believes is the underlying motivation,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

The methods used to prevent powerful or influential individuals from acquiring too much control include such collective behaviors as gossiping, ostracism, banishment, and even, in extreme cases, execution. “In egalitarian hierarchies the pyramid of power is turned upside down,” Boehm explains, “with a politically united rank and file dominating the alpha-male types” (66).

The implications for theories about our ancestors are profound. The groups humans were living in as they evolved the traits that made them what we recognize today as human were highly motivated and well-equipped to both prevent and when necessary punish the type of free-riding that evolutionary psychologists and other selfish gene theorists insist would undermine group cohesion. Boehm makes this point explicit, writing,

The overall hypothesis is straightforward: basically, the advent of egalitarianism shifted the balance of forces within natural selection so that within-group selection was substantially debilitated and between-group selection was amplified. At the same time, egalitarian moral communities found themselves uniquely positioned to suppress free-riding… at the level of phenotype. With respect to the natural selection of behavior genes, this mechanical formula clearly favors the retention of altruistic traits. (199)

This is the point where he picks up the argument again in Moral Origins. The story of the homicidal man named /Twi is an extreme example of the predictable results of overly aggressive behaviors. Any nomadic forager who intransigently tries to throw his weight around the way alpha male chimpanzees do will probably end up getting “porcupined” (158) like /Twi and the three men depicted in the Magdalenian cave art in Spain.

Murder is an extreme example of the types of free-riding behavior that nomadic foragers reliably sanction. Any politically overbearing treatment of group mates, particularly the issuing of direct commands, is considered a serious moral transgression. But alongside this disapproval of bossy or bullying behavior there exists an ethic of sharing and generosity, so people who are thought to be stingy are equally disliked. As Boehm writes in Hierarchy in the Forest, “Politically egalitarian foragers are also, to a significant degree, materially egalitarian” (70). The image many of us grew up with of the lone prehistoric male hunter going out to stalk his prey, bringing it back as a symbol of his prowess in hopes of impressing beautiful and fertile females, turns out to be completely off-base. In most hunter-gatherer groups, the males hunt in teams, and whatever they kill gets turned over to someone else who distributes the meat evenly among all the men so each of their families gets an equal portion. In some cultures, “the hunter who made the kill gets a somewhat larger share,” Boehm writes in Moral Origins, “perhaps as an incentive to keep him at his arduous task” (185). But every hunter knows that most of the meat he procures will go to other group members—and the sharing is done without any tracking of who owes whom a favor. Boehm writes,

The models tell us that the altruists who are helping nonkin more than they are receiving help must be “compensated” in some way, or else they—meaning their genes—will go out of business. What we can be sure of is that somehow natural selection has managed to work its way around these problems, for surely humans have been sharing meat and otherwise helping others in an unbalanced fashion for at least 45,000 years. (184)

Following biologist Richard Alexander, Boehm sees this type of group beneficial generosity as an example of “indirect reciprocity.” And he believes it functions as a type of insurance policy, or, as anthropologists call it, “variance reduction.” It’s often beneficial for an individual’s family to pay in, as it were, but much of the time people contribute knowing full well the returns will go to others.

Less extreme cases than the psychopaths who end up porcupined involve what Boehm calls “meat-cheaters.” A prominent character in Moral Origins is an Mbuti Pygmy man named Cephu, whose story was recounted in rich detail by the anthropologist Colin Turnbull. One of the cooperative hunting strategies the Pygmies use has them stretching a long net through the forest while other group members create a ruckus to scare animals into it. Each net holder is entitled to whatever runs into his section of the net, which he promptly spears to death. What Cephu did was sneak farther ahead of the other men to improve his chances of having an animal run into his section of the net before the others. Unfortunately for him, everyone quickly realized what was happening. Returning to the camp after depositing his ill-gotten gains in his hut, Cephu heard someone call out that he was an animal. Beyond that, everyone was silent. Turnbull writes,

Cephu walked into the group, and still nobody spoke. He went to where a youth was sitting in a chair. Usually he would have been offered a seat without his having to ask, and now he did not dare to ask, and the youth continued to sit there in as nonchalant a manner as he could muster. Cephu went to another chair where Amabosu was sitting. He shook it violently when Amabosu ignored him, at which point he was told, “Animals lie on the ground.” (Quoted 39)

Thus began the accusations. Cephu burst into tears and tried to claim that his repositioning himself in the line was an accident. No one bought it. Next, he made the even bigger mistake of trying to suggest he was entitled to his preferential position. “After all,” Turnbull writes, “was he not an important man, a chief, in fact, of his own band?” At this point, Manyalibo, who was taking the lead in bringing Cephu to task, decided that the matter was settled. He said that

there was obviously no use prolonging the discussion. Cephu was a big chief, and Mbuti never have chiefs. And Cephu had his own band, of which he was chief, so let him go with it and hunt elsewhere and be a chief elsewhere. Manyalibo ended a very eloquent speech with “Pisa me taba” (“Pass me the tobacco”). Cephu knew he was defeated and humiliated. (40)

The guilty verdict Cephu had to accept to avoid being banished from the band came with the sentence that he had to relinquish all the meat he brought home that day. His attempt at free-riding therefore resulted not only in a loss of food but also in a much longer-lasting blow to his reputation.

Boehm has built a large database from ethnographic studies like Lee’s and Turnbull’s, and it shows that in their handling of meat-cheaters and self-aggrandizers nomadic foragers all over the world use strategies similar to those of the Pygmies. First comes the gossip about your big ego, your dishonesty, or your cheating. Soon you’ll recognize a growing reluctance of others to hunt with you, or you’ll have a tough time wooing a mate. Next, you may be directly confronted by someone delegated by a quorum of group members. If you persist in your free-riding behavior, especially if it entails murder or serious attempts at domination, you’ll probably be ambushed and turned into a porcupine. Alexander put forth the idea of “reputational selection,” whereby individuals benefit in terms of survival and reproduction from being held in high esteem by their group mates. Boehm prefers the term “social selection,” however, because it encompasses the idea that people are capable of figuring out what’s best for their groups and codifying it in their culture. How well an individual internalizes a group’s norms has profound effects on his or her chances for survival and reproduction. Boehm’s theory is that our consciences are the mechanisms we’ve evolved for such internalization.

Though there remain quite a few chicken-or-egg conundrums to work out, Boehm has cobbled together archeological evidence from butchering sites, primatological evidence from observations of apes in the wild and in captivity, and quantitative analyses of ethnographic records to put forth a plausible history of how our consciences evolved and how we became so concerned for the well-being of people we may barely know. As humans began hunting larger game, demanding greater coordination and more effective long-distance killing tools, an already extant resentment of alphas expressed itself in collective suppression of bullying behavior. And as our developing capacity for language made it possible to keep track of each other’s behavior long-term, it started to become important for everyone to maintain a reputation for generosity, cooperativeness, and even-temperedness. Boehm writes,

Ultimately, the social preferences of groups were able to affect gene pools profoundly, and once we began to blush with shame, this surely meant that the evolution of conscientious self-control was well under way. The final result was a full-blown, sophisticated modern conscience, which helps us to make subtle decisions that involve balancing selfish interests in food, power, sex, or whatever against the need to maintain a decent personal moral reputation in society and to feel socially valuable as a person. The cognitive beauty of having such a conscience is that it directly facilitates making useful social decisions and avoiding negative social consequences. Its emotional beauty comes from the fact that we in effect bond with the values and rules of our groups, which means we can internalize our group’s mores, judge ourselves as well as others, and, hopefully, end up with self-respect. (173)

Social selection is actually a force that acts on individuals, selecting for those who can most strategically suppress their own selfish impulses. But in establishing a mechanism that guards the group norm of cooperation against free riders, it increased the potential of competition between groups and quite likely paved the way for altruism of the sort Leacock’s informant Thomas displayed. Boehm writes,

Thomas surely knew that if he turned down the pair of hungry men, they might “bad-mouth” him to people he knew and thereby damage his reputation as a properly generous man. At the same time, his costly generosity might very well be mentioned when they arrived back in their camp, and through the exchange of favorable gossip he might gain in his public esteem in his own camp. But neither of these socially expedient personal considerations would account for the “dead” feeling he mentioned with such gravity. He obviously had absorbed his culture’s values about sharing and in fact had internalized them so deeply that being selfish was unthinkable. (221)

In response to Ghiselin’s cynical credo, “Scratch an altruist and watch a hypocrite bleed,” Boehm points out that the best way to garner the benefits of kindness and sympathy is to actually be kind and sympathetic. He points out further that if altruism is being selected for at the level of phenotypes (the end-products of genetic processes) we should expect it to have an impact at the level of genes. In a sense, we’ve bred altruism into ourselves. Boehm writes,

If such generosity could be readily faked, then selection by altruistic reputation simply wouldn’t work. However, in an intimate band of thirty that is constantly gossiping, it’s difficult to fake anything. Some people may try, but few are likely to succeed. (189)

The result of the social selection dynamic that began all those millennia ago is that today generosity is in our bones. There are of course circumstances that can keep our generous impulses from manifesting themselves, and those impulses have a sad tendency to be directed toward members of our own cultural groups and no one else. But Boehm offers a slightly more optimistic formula than Ghiselin’s:

I do acknowledge that our human genetic nature is primarily egoistic, secondarily nepotistic, and only rather modestly likely to support acts of altruism, but the credo I favor would be “Scratch an altruist, and watch a vigilant and successful suppressor of free riders bleed. But watch out, for if you scratch him too hard, he and his group may retaliate and even kill you.” (205)

Read Part 1:

A CRASH COURSE IN MULTI-LEVEL SELECTION THEORY: PART 1-THE GROUNDWORK LAID BY DAWKINS AND GOULD

And Part 2:

A CRASH COURSE IN MULTILEVEL SELECTION THEORY PART 2: STEVEN PINKER FALLS PREY TO THE AVERAGING FALLACY SOBER AND WILSON TRIED TO WARN HIM ABOUT

Also of interest:

THE FEMINIST SOCIOBIOLOGIST: AN APPRECIATION OF SARAH BLAFFER HRDY DISGUISED AS A REVIEW OF “MOTHERS AND OTHERS: THE EVOLUTIONARY ORIGINS OF MUTUAL UNDERSTANDING”

Dennis Junk

A Crash Course in Multilevel Selection Theory part 2: Steven Pinker Falls Prey to the Averaging Fallacy Sober and Wilson Tried to Warn Him about

Elliott Sober and David Sloan Wilson’s “Unto Others” lays out a theoretical framework for how selection at the level of the group could have led to the evolution of greater cooperation among humans. They point out the mistake many theorists make in thinking that, because evolution can be defined as changes in gene frequencies, it’s only genes that matter. But that definition leaves aside the question of how traits and behaviors evolve, i.e., what dynamics lead to the changes in gene frequencies. Steven Pinker failed to grasp their point.

If you were a woman applying to graduate school at the University of California at Berkeley in 1973, you would have had a 35 percent chance of being accepted. If you were a man, your chances would have been significantly better. Forty-four percent of male applicants got accepted that year. Apparently, at this early stage of the feminist movement, even a school as notoriously progressive as Berkeley still discriminated against women. But not surprisingly, when confronted with these numbers, the women of the school were ready to take action to right the supposed injustice. After a lawsuit was filed charging admissions offices with bias, however, a department-by-department examination was conducted which produced a curious finding: not a single department admitted a significantly higher percentage of men than women. In fact, there was a small but significant trend in the opposite direction—a bias against men.

What this means is that somehow the aggregate probability of being accepted into grad school was dramatically different from the probabilities worked out through disaggregating the numbers with regard to important groupings, in this case the academic departments housing the programs assessing the applications. This discrepancy called for an explanation, and statisticians had had one on hand since 1951.

This paradoxical finding fell into place when it was noticed that women tended to apply to departments with low acceptance rates. To see how this can happen, imagine that 90 women and 10 men apply to a department with a 30 percent acceptance rate. This department does not discriminate and therefore accepts 27 women and 3 men. Another department, with a 60 percent acceptance rate, receives applications from 10 women and 90 men. This department doesn’t discriminate either and therefore accepts 6 women and 54 men. Considering both departments together, 100 men and 100 women applied, but only 33 women were accepted, compared with 57 men. A bias exists in the two departments combined, despite the fact that it does not exist in any single department, because the departments contribute unequally to the total number of applicants who are accepted. (25)

This is how the counterintuitive statistical phenomenon known as Simpson’s Paradox is explained by philosopher Elliott Sober and biologist David Sloan Wilson in their 1998 book Unto Others: The Evolution and Psychology of Unselfish Behavior, in which they argue that the same principle can apply to the relative proliferation of organisms in groups with varying percentages of altruists and selfish actors. In this case, the benefit to the group of having more altruists is analogous to the higher acceptance rates for grad school departments which tend to receive a disproportionate number of applications from men. And the counterintuitive outcome is that, in an aggregated population of groups, altruists have an advantage over selfish actors—even though within each of those groups selfish actors outcompete altruists.  
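To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not anything Sober and Wilson provide) that reproduces the two-department example quoted above. Neither department discriminates, yet pooling the numbers produces a large gap:

# Two hypothetical departments from the example above:
# (women applicants, men applicants, acceptance rate)
departments = [
    (90, 10, 0.30),  # low-acceptance department, mostly women apply
    (10, 90, 0.60),  # high-acceptance department, mostly men apply
]

women_admitted = sum(w * rate for w, m, rate in departments)
men_admitted = sum(m * rate for w, m, rate in departments)
women_applied = sum(w for w, m, rate in departments)
men_applied = sum(m for w, m, rate in departments)

# Each department is unbiased on its own, yet the pooled rates diverge sharply.
print(f"Women: {women_admitted:.0f}/{women_applied} = {women_admitted / women_applied:.0%}")
print(f"Men:   {men_admitted:.0f}/{men_applied} = {men_admitted / men_applied:.0%}")
# Prints: Women: 33/100 = 33%  and  Men: 57/100 = 57%

The same bookkeeping applies when the “departments” are groups containing different proportions of altruists.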

            Sober and Wilson caution that this assessment is based on certain critical assumptions about the population in question. “This model,” they write, “requires groups to be isolated as far as the benefits of altruism are concerned but nevertheless to compete in the formation of new groups” (29). It also requires that altruists and nonaltruists somehow “become concentrated in different groups” (26) so the benefits of altruism can accrue to one while the costs of selfishness accrue to the other. One type of group that follows this pattern is a family, whose members resemble each other in terms of their traits—including a propensity for altruism—because they share many of the same genes. In humans, families tend to be based on pair bonds established for the purpose of siring and raising children, forming a unit that remains stable long enough for the benefits of altruism to be of immense importance. As the children reach adulthood, though, they disperse to form their own family groups. Therefore, assuming families live in a population with other families, group selection ought to lead to the evolution of altruism.

            Sober and Wilson wrote Unto Others to challenge the prevailing approach to solving mysteries in evolutionary biology, which was to focus strictly on competition between genes. In place of this exclusive attention on gene selection, they advocate a pluralistic approach that takes into account the possibility of selection occurring at multiple levels, from genes to individuals to groups. This is where the term multilevel selection comes from. In certain instances, focusing on one level instead of another amounts to a mere shift in perspective. Looking at families as groups, for instance, leads to many of the same conclusions as looking at them in terms of vehicles for carrying genes.

William D. Hamilton, whose thinking inspired both Richard Dawkins’ Selfish Gene and E.O. Wilson’s Sociobiology, long ago explained altruism within families by setting forth the theory of kin selection, which posits that family members will at times behave in ways that benefit each other even at their own expense because the genes underlying the behavior don’t make any distinction between the bodies which happen to be carrying copies of themselves. Sober and Wilson write,

As we have seen, however, kin selection is a special case of a more general theory—a point that Hamilton was among the first to appreciate. In his own words, “it obviously makes no difference if altruists settle with altruists because they are related… or because they recognize fellow altruists as such, or settle together because of some pleiotropic effect of the gene on habitat preference.” We therefore need to evaluate human social behavior in terms of the general theory of multilevel selection, not the special case of kin selection. When we do this, we may discover that humans, bees, and corals are all group-selected, but for different reasons. (134)

A general proclivity toward altruism based on selection at the level of family groups may look somewhat different from kin-selected altruism targeted solely at those who are recognized as close relatives. For obvious reasons, the possibility of group selection becomes even more important when it comes to explaining the evolution of altruism among unrelated individuals.

            We have to bear in mind that Dawkins’s selfish genes are only selfish in the sense that they concern themselves with nothing but ensuring their own continued existence—by calling them selfish he never meant to imply they must always be associated with selfishness as a trait of the bodies they provide the blueprints for. Selfish genes, in other words, can sometimes code for altruistic behavior, as in the case of kin selection. So the question of what level selection operates on is much more complicated than it would be if the gene-focused approach predicted selfishness while the multilevel approach predicted altruism. But many strict gene selection advocates argue that because selfish gene theory can account for altruism in myriad ways there’s simply no need to resort to group selection. Evolution is, after all, changes over time in gene frequencies. So why should we look to higher levels?

            Sober and Wilson demonstrate that if you focus on individuals in their simple model of predominantly altruistic groups competing against predominantly selfish groups, you will conclude that altruism is adaptive because it happens to be the trait that ends up proliferating. You may add the qualifier that it’s adaptive in the specified context, but the upshot is that from the perspective of individual selection altruism outcompetes selfishness. The problem is that this is the same reasoning underlying the misguided accusations against Berkeley; for any individual in that aggregate population, it was advantageous to be a male—but there was never any individual selection pressure against females. Sober and Wilson write,

The averaging approach makes “individual selection” a synonym for “natural selection.” The existence of more than one group and fitness differences between the groups have been folded into the definition of individual selection, defining group selection out of existence. Group selection is no longer a process that can occur in theory, so its existence in nature is settled a priori. Group selection simply has no place in this semantic framework. (32)

Thus, a strict focus on individuals, though it may appear to fully account for the outcome, necessarily obscures a crucial process that went into producing it. The same logic might be applicable to any analysis based on gene-level accounting. Sober and Wilson write that

if the point is to understand the processes at work, the resultant is not enough. Simpson’s paradox shows how confusing it can be to focus only on net outcomes without keeping track of the component causal factors. This confusion is carried into evolutionary biology when the separate effects of selection within and between groups are expressed in terms of a single quantity. (33)

They go on to label this approach “the averaging fallacy.” Acknowledging that nobody explicitly insists that group selection is somehow impossible by definition, they still find countless instances in which it is defined out of existence in practice. They write,

Even though the averaging fallacy is not endorsed in its general form, it frequently occurs in specific cases. In fact, we will make the bold claim that the controversy over group selection and altruism in biology can be largely resolved simply by avoiding the averaging fallacy. (34)

            Unfortunately, this warning about the averaging fallacy continues to go unheeded by advocates of strict gene selection theories. Even intellectual heavyweights of the caliber of Steven Pinker fall into the trap. In a severely disappointing essay published just last month at Edge.org called “The False Allure of Group Selection,” Pinker writes

If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate. Individual human traits evolved in an environment that includes other humans, just as they evolved in environments that include day-night cycles, predators, pathogens, and fruiting trees.

Multilevel selectionists wouldn’t disagree with this point; they would readily explain traits that benefit everyone in the group at no cost to the individuals possessing them as arising through individual selection. But Pinker here shows his readiness to fold the process of group competition into some generic “context.” The important element of the debate, of course, centers on traits that benefit the group at the expense of the individual. Pinker writes,

Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.

But, as Sober and Wilson demonstrate, those self-sacrificial traits wouldn’t necessarily be selected against in the population. In fact, self-sacrifice would be selected for if that population is an aggregation of competing groups. Pinker fails to even consider this possibility because he’s determined to stick with the definition of natural selection as occurring at the level of genes.
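A toy version of Sober and Wilson’s trait-group model, sketched below in Python with parameters of my own choosing (a baseline fitness of 10, a benefit of 5 shared with group mates, a cost of 1 paid by each altruist), shows what this objection misses: within each group the selfish members out-reproduce the altruists, yet because the altruist-heavy group grows faster, altruists gain ground in the combined population.

BASE, B, C = 10.0, 5.0, 1.0  # baseline fitness, benefit conferred on group mates, cost to the altruist

def fitnesses(n_altruists, n_total):
    """Expected offspring for an altruist and for a selfish member of one group."""
    others = n_total - 1
    f_alt = BASE - C + B * (n_altruists - 1) / others  # receives benefits only from the other altruists
    f_sel = BASE + B * n_altruists / others            # free-rides on every altruist and pays no cost
    return f_alt, f_sel

groups = [(20, 100), (80, 100)]  # (number of altruists, group size)

altruist_offspring = total_offspring = 0.0
for n_alt, n in groups:
    f_alt, f_sel = fitnesses(n_alt, n)
    print(f"{n_alt}/{n} altruists: altruist fitness {f_alt:.2f} < selfish fitness {f_sel:.2f}")
    altruist_offspring += n_alt * f_alt
    total_offspring += n_alt * f_alt + (n - n_alt) * f_sel

before = sum(a for a, n in groups) / sum(n for a, n in groups)
print(f"global altruist frequency: {before:.1%} in the parents, {altruist_offspring / total_offspring:.1%} in the offspring")
# Within both groups selfishness wins, but globally altruists rise from 50.0% to about 51.6%.

This is the same Simpson’s paradox structure as in the Berkeley example: averaging over the whole population hides the between-group process doing the work.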

            Indeed, the centerpiece of Pinker’s argument against group selection in this essay is his definition of natural selection. Channeling Dawkins, he writes that evolution is best understood as competition between “replicators” to continue replicating. The implication is that groups, and even individuals, can’t be the units of selection because they don’t replicate themselves. He writes,

The theory of natural selection applies most readily to genes because they have the right stuff to drive selection, namely making high-fidelity copies of themselves. Granted, it's often convenient to speak about selection at the level of individuals, because it’s the fate of individuals (and their kin) in the world of cause and effect which determines the fate of their genes. Nonetheless, it’s the genes themselves that are replicated over generations and are thus the targets of selection and the ultimate beneficiaries of adaptations.

The underlying assumption is that, because genes rely on individuals as “vehicles” to replicate themselves, individuals can sometimes be used as shorthand for genes when discussing natural selection. Since gene competition within an individual would be to the detriment of all the genes that individual carries and strives to pass on, the genes collaborate to suppress conflicts amongst themselves. The further assumption underlying Pinker’s and Dawkins’s reasoning is that groups make for poor vehicles because suppressing within-group conflict would be too difficult. But, as Sober and Wilson write,

This argument does not evaluate group selection on a trait-by-trait basis. In addition, it begs the question of how individuals became such good vehicles of selection in the first place. The mechanisms that currently limit within-individual selection are not a happy coincidence but are themselves adaptations that evolved by natural selection. Genomes that managed to limit internal conflict presumably were more fit than other genomes, so these mechanisms evolve by between-genome selection. Being a good vehicle as Dawkins defines it is not a requirement for individual selection—it’s a product of individual selection. Similarly, groups do not have to be elaborately organized “superorganisms” to qualify as a unit of selection with respect to particular traits. (97)

The idea of a “trait-group” is exemplified by the simple altruistic group versus selfish group model they used to demonstrate the potential confusion arising from Simpson’s paradox. As long as individuals with the altruism trait interact with enough regularity for the benefits to be felt, they can be defined as a group with regard to that trait.

            Pinker makes several other dubious points in his essay, most of them based on the reasoning that group selection isn’t “necessary” to explain this or that trait, only justifying his prejudice in favor of gene selection with reference to the selfish gene definition of evolution. Of course, it may be possible to imagine gene-level explanations for behaviors humans engage in predictably, like punishing cheaters in economic interactions even when doing so means the punisher incurs some cost to him or herself. But Pinker is so caught up with replicators he overlooks the potential of this type of punishment to transform groups into functional vehicles. As Sober and Wilson demonstrate, group competition can lead to the evolution of altruism on its own. But once altruism reaches a certain threshold, group selection can become even more powerful because the altruistic group members will, by definition, be better at behaving as a group. And one of the mechanisms we might expect to evolve through an ongoing process of group selection would operate to curtail within-group conflict and exploitation. The costly punishment Pinker dismisses as possibly explicable through gene selection is much more likely to have arisen through group selection. Sober and Wilson delight in the irony that, “The entire language of social interactions among individuals in groups has been borrowed to describe genetic interactions within individuals; ‘outlaw’ genes, ‘sheriff’ genes, ‘parliaments’ of genes, and so on” (147).

Unto Others makes such a powerful case against strict gene-level explanations and for the potentially crucial role of group selection that anyone who undertakes to argue that the appeal of multilevel selection theory is somehow false without even mentioning it risks serious embarrassment. Published fourteen years ago, it still contains a remarkably effective rebuttal to Pinker’s essay:  

In short, the concept of genes as replicators, widely regarded as a decisive argument against group selection, is in fact totally irrelevant to the subject. Selfish gene theory does not invoke any processes that are different from the ones described in multilevel selection theory, but merely looks at the same processes in a different way. Those benighted group selectionists might be right in every detail; group selection could have evolved altruists that sacrifice themselves for the benefit of others, animals that regulate their numbers to avoid overexploiting their resources, and so on. Selfish gene theory calls the genes responsible for these behaviors “selfish” for the simple reason that they evolved and therefore replicated more successfully than other genes. Multilevel selection theory, on the other hand, is devoted to showing how these behaviors evolve. Fitness differences must exist somewhere in the biological hierarchy—between individuals within groups, between groups in the global population, and so on. Selfish gene theory can’t even begin to explore these questions on the basis of the replicator concept alone. The vehicle concept is its way of groping toward the very issues that multilevel selection theory was developed to explain. (88)

Sober and Wilson, in opening the field of evolutionary studies to forces beyond gene competition, went a long way toward vindicating Stephen Jay Gould, who throughout his career held that selfish gene theory was too reductionist—he even incorporated their arguments into his final book. But Sober and Wilson are still working primarily in the abstract realm of evolutionary modeling, although in the second half of Unto Others they cite multiple psychological and anthropological sources. A theorist even more after Gould’s own heart, one who synthesizes both models and evidence from multiple fields, from paleontology to primatology to ethnography, into a hypothetical account of the natural history of human evolution, from the ancestor we share with the great apes to modern nomadic foragers and beyond, is the anthropologist Christopher Boehm, whose work we’ll be exploring in part 3.

Read Part 1 of

A CRASH COURSE IN MULTI-LEVEL SELECTION THEORY: PART 1-THE GROUNDWORK LAID BY DAWKINS AND GOULD

And Part 3:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

A Crash Course in Multi-Level Selection Theory: Part 1-The Groundwork Laid by Dawkins and Gould

What is the unit of selection? Richard Dawkins famously argues that it’s genes that are selected for over the course of evolutionary change. Stephen Jay Gould, meanwhile, maintained that it must be individuals and even sometimes groups of individuals. In their fascinating back and forth lies the foundation of today’s debates about multi-level selection theory.

Responding to Stephen Jay Gould’s criticisms of his then most infamous book, Richard Dawkins writes in a footnote to the 1989 edition of The Selfish Gene, “I find his reasoning wrong but interesting, which, incidentally, he has been kind enough to tell me, is how he usually finds mine” (275). Dawkins’s idea was that evolution is, at its core, competition between genes with success measured in continued existence. Genes are replicators. Evolution is therefore best thought of as the outcome of this competition between replicators to keep on replicating. Gould’s response was that natural selection can’t possibly act on genes because genes are always buried in bodies. Those replicators always come grouped with other replicators and have only indirect effects on the bodies they ultimately serve as blueprints for. Natural selection, as Gould suggests, can’t “see” genes; it can only see, and act on, individuals.

The image of individual genes, plotting the course of their own survival, bears little relationship to developmental genetics as we understand it. Dawkins will need another metaphor: genes caucusing, forming alliances, showing deference for a chance to join a pact, gauging probable environments. But when you amalgamate so many genes and tie them together in hierarchical chains of action mediated by environments, we call the resultant object a body. (91)

Dawkins’ rebuttal, in both later editions of The Selfish Gene and in The Extended Phenotype, is, essentially, Duh—of course genes come grouped together with other genes and only ever evolve in context. But the important point is that individuals never replicate themselves. Bodies don’t create copies of themselves. Genes, on the other hand, do just that. Bodies are therefore best thought of as vehicles for these replicators.

            As a subtle hint of his preeminent critic’s unreason, Dawkins quotes himself in his response to Gould, citing a passage Gould must’ve missed, in which the genes making up an individual organism’s genome are compared to the members of a rowing team. Each contributes to the success or failure of the team, but it’s still the individual members that are important. Dawkins describes how the concept of an “Evolutionarily Stable Strategy” can be applied to a matter

arising from the analogy of oarsmen in a boat (representing genes in a body) needing a good team spirit. Genes are selected, not as “good” in isolation, but as good at working against the background of the other genes in the gene pool. A good gene must be compatible with, and complementary to, the other genes with whom it has to share a long succession of bodies. A gene for plant-grinding teeth is a good gene in the gene pool of a herbivorous species, but a bad gene in the gene pool of a carnivorous species. (84)

Gould, in other words, isn’t telling Dawkins anything he hasn’t already considered. But does that mean Gould’s point is moot? Or does the rowing team analogy actually support his reasoning? In any case, they both agree that the idea of a “good gene” is meaningless without context.

            The selfish gene idea has gone on to become the linchpin of research in many subfields of evolutionary biology, its main appeal being the ease with which it lends itself to mathematical modeling. If you want to know what traits are the most likely to evolve, you create a simulation in which individuals with various traits compete. Run the simulation, and the outcome lets you determine the relative probability of a given trait evolving in the context of individuals with other traits. You can then compare the statistical outcomes derived from the simulation with experimental data on how the actual animals behave. This sort of analysis relies on the assumption that the traits in question are discrete and can be selected for, and this reasoning usually rests on the further assumption that the traits are, beyond a certain threshold probability, the end product of chemical processes set in motion by a particular gene or set of genes. In reality, everyone acknowledges that this one-to-one correspondence between gene and trait—or constellation of genes and trait—seldom occurs. All genes can do is make their associated traits more likely to develop in specific environments. But if the sample size is large enough, meaning that the population you’re modeling is large enough, and if the interactions go through enough iterations, the complicating nuances will cancel out in the final statistical averaging.
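
To give a sense of what this kind of modeling looks like in practice, here is a minimal sketch of one simple version of it: a trait conferring a small fitness edge competes in a finite population, and repeated runs estimate how often it ends up taking over. The population size, fitness values, and function name are illustrative placeholders of mine, not anything drawn from Dawkins or the ethologists who build on his work.

```python
import random

def fixation_probability(pop_size=200, start_freq=0.05,
                         fitness_a=1.03, fitness_b=1.00,
                         max_generations=1000, replicates=100):
    """Estimate how often trait A spreads through the whole population."""
    fixed = 0
    for _ in range(replicates):
        freq = start_freq
        for _ in range(max_generations):
            # Selection: weight each trait by its relative fitness.
            mean_fitness = freq * fitness_a + (1 - freq) * fitness_b
            p = freq * fitness_a / mean_fitness
            # Drift: the next generation is a finite random sample.
            carriers = sum(random.random() < p for _ in range(pop_size))
            freq = carriers / pop_size
            if freq in (0.0, 1.0):
                break
        if freq == 1.0:
            fixed += 1
    return fixed / replicates

print(fixation_probability())
```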

            Gould’s longstanding objection to this line of research—as productive as he acknowledged it could be—was that processes, and even events, like large-scale natural catastrophes, that occur at higher levels of analysis can be just as important as, or more important than, the shuffling of gene frequencies at the lowest level. It’s hardly irrelevant that Dawkins and most of his fellow ethologists who rely on his theories primarily study insects—relatively simple-bodied species that produce huge populations and have rapid generational turnover. Gould, on the other hand, focused his research on the evolution of snail shells. And he kept his eye throughout his career on the big picture of how evolution worked over vast periods of time. As a paleontologist, he found himself looking at trends in the fossil record that didn’t seem to follow the expected patterns of continual, gradual development within species. In fact, the fossil records of most lineages seem to be characterized by long periods of slow or no change followed by sudden disruptions—a pattern he and Niles Eldredge refer to as punctuated equilibrium. In working out an explanation for this pattern, Eldredge and Gould did Dawkins one better: sure, genes are capable of a sort of immortality, they reasoned, but then so are species. Evolution then isn’t just driven by competition between genes or individuals; something like species selection must also be taking place.

            Dawkins accepted this reasoning up to a point, seeing that it probably even goes some way toward explaining the patterns that often emerge in the fossil record. But whereas Gould believed there was so much randomness at play in large populations that small differences would tend to cancel out, and that “speciation events”—periods when displacement or catastrophe led to smaller group sizes—were necessary for variations to take hold in the population, Dawkins thought it unlikely that variations really do cancel each other out even in large groups. This is because he knows of several examples of “evolutionary arms races,” multigenerational exchanges in which a small change leads to a big advantage, which in turn leads to a ratcheting up of the trait in question as all the individuals in the population are now competing in a changed context. Sexual selection, based on competition for reproductive access to females, is a common cause of arms races. That’s why extreme traits in the form of plumage or body size or antlers are easy to point to. Once you allow for this type of change within populations, you are forced to conclude that gene-level selection is much more powerful and important than species-level selection. As Dawkins explains in The Extended Phenotype,

Accepting Eldredge and Gould’s belief that natural selection is a general theory that can be phrased on many levels, the putting together of a certain quantity of evolutionary change demands a certain minimum number of selective replicator-eliminations. Whether the replicators that are selectively eliminated are genes or species, a simple evolutionary change requires only a few replicator substitutions. A large number of replicator substitutions, however, are needed for the evolution of a complex adaptation. The minimum replacement cycle time when we consider the gene as replicator is one individual generation, from zygote to zygote. It is measured in years or months, or smaller time units. Even in the largest organisms it is measured in only tens of years. When we consider the species as replicator, on the other hand, the replacement cycle time is the interval from speciation event to speciation event, and may be measured in thousands of years, tens of thousands, hundreds of thousands. In any given period of geological time, the number of selective species extinctions that can have taken place is many orders of magnitude less than the number of selective allele replacements that can have taken place. (106)

This reasoning, however, applies only to features and traits that are under intense selection pressure. So in determining whether a given trait arose through a process of gene selection or species selection you would first have to know certain features about the nature of that trait: how much of an advantage it confers if any, how widely members of the population vary in terms of it, and what types of countervailing forces might cancel out or intensify the selection pressure.
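
To get a feel for the orders of magnitude Dawkins is pointing to, here is a quick back-of-the-envelope comparison. The specific figures (a ten-year generation, a speciation event every hundred thousand years) are illustrative assumptions of mine, not numbers he gives.

```python
# Rough comparison of how many selective "replacement cycles" fit into a
# fixed stretch of geological time at the two levels Dawkins contrasts.
# All figures are illustrative assumptions, not values from his book.
span_years = 10_000_000        # ten million years of evolution
gene_cycle_years = 10          # one generation, zygote to zygote
species_cycle_years = 100_000  # one speciation event to the next

gene_cycles = span_years // gene_cycle_years        # 1,000,000
species_cycles = span_years // species_cycle_years  # 100

print(gene_cycles, species_cycles, gene_cycles // species_cycles)
# Gene-level selection gets roughly 10,000 times as many chances to
# substitute one replicator for another over the same interval.
```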

            The main difference between Dawkins’s and Gould’s approaches to evolutionary questions is that Dawkins prefers to frame answers in terms of the relative success of competing genes while Gould prefers to frame them in terms of historical outcomes. Dawkins would explain a wasp’s behavior by pointing out that behaving that way ensures copies of the wasp’s genes will persist in the population. Gould would explain the shape of some mammalian skull by pointing out how contingent that shape is on the skulls of earlier creatures in the lineage. Dawkins knows history is important. Gould knows gene competition is important. The difference is in the relative weights given to each. Dawkins might challenge Gould, “Gene selection explains self-sacrifice for the sake of close relatives, who carry many of the same genes”—an idea known as kin selection—“what does your historical approach say about that?” Gould might then point to the tiny forelimbs of a tyrannosaurus, or the original emergence of feathers (which were probably sported by some other dinosaur) and challenge Dawkins, “Account for that in terms of gene competition.”

            The area where these different perspectives came into the most direct conflict was sociobiology, which later developed into evolutionary psychology. This is a field in which theorists steeped in selfish gene thinking look at human social behavior and see in it the end product of gene competition. Behaviors are treated as traits, traits are assumed to have a genetic basis, and, since the genes involved exist because they outcompeted other genes producing other traits, their continuing existence suggests that the traits are adaptive, i.e. that they somehow make the continued existence of the associated genes more likely. The task of the evolutionary psychologist is to work out how. This was in fact the approach ethologists had been applying, primarily to insects, for decades.

E.O. Wilson, a renowned specialist on ant behavior, was the first to apply it to humans in his book Sociobiology, and in a later book, On Human Nature, which won him the Pulitzer. But the assumption that human behavior is somehow fixed to genes and that it always serves to benefit those genes was anathema to Gould. If ever there were a creature for whom the causal chain from gene to trait or behavior was too long and complex for the standard ethological approaches to yield valid insights, it had to be humans.

Gould famously compared evolutionary psychological theories to the “Just-so” stories of Kipling, suggesting they relied on far too many shaky assumptions and made use of far too little evidence. From Gould’s perspective, any observable trait, in humans or any other species, was just as likely to have no effect on fitness at all as it was to be adaptive. For one thing, the trait could be a byproduct of some other trait that’s adaptive; it could have been selected for indirectly. Or it could emerge from essentially random fluctuations in gene frequencies that take hold in populations because they neither help nor hinder survival and reproduction. And in humans of course there are things like cultural traditions, forethought, and technological intervention (as when a gene for near-sightedness is rendered moot with contact lenses). The debate got personal and heated, but in the end evolutionary psychology survived Gould’s criticisms. Outsiders could even be forgiven for suspecting that Gould actually helped the field by highlighting some of its weaknesses. He, in fact, didn’t object in principle to the study of human behavior from the perspective of biological evolution; he just believed the earliest attempts were far too facile. Still, there are grudges being harbored to this day.

            Another way to look at the debate between Dawkins and Gould, one which lies at the heart of the current debate over group selection, is that Dawkins favored reductionism while Gould preferred holism. Dawkins always wants to get down to the most basic unit. His “‘central theorem’ of the extended phenotype” is that “An animal’s behaviour tends to maximize the survival of genes ‘for’ that behaviour, whether or not those genes happen to be in the body of the particular animal performing it” (233). Reductionism, despite its bad name, is an extremely successful approach to arriving at explanations, and it has a central role in science. Gould’s holistic approach, while more inclusive, is harder to quantify and harder to model. But there are several analogues to natural selection that suggest ways in which higher-order processes might be important for changes at lower orders. Regular interactions between bodies—or even between groups or populations of bodies—may be crucial in accounting for changes in gene frequencies the same way software can impact the functioning of hardware or symbolic thoughts can determine patterns of neural connections.

            The question becomes whether or not higher-level processes operate regularly enough that their effects can’t safely be assumed to average out over time. One pitfall of selfish gene thinking is that it lends itself to the conflation of definitions and explanations. Evolution can be defined as changes in gene frequencies. But assuming a priori that competition at the level of genes causes those changes means running the risk of overlooking measurable outcomes of processes at higher levels. The debate, then, isn’t over whether evolution occurs at the level of genes—it has to—but rather over what processes lead to the changes. It could be argued that Gould, in his magnum opus The Structure of Evolutionary Theory, which was finished shortly before his death, forced Dawkins into making just this mistake. Responding to it in an essay collected in A Devil’s Chaplain, Dawkins writes,

Gould saw natural selection as operating on many levels in the hierarchy of life. Indeed it may, after a fashion, but I believe that such selection can have evolutionary consequences only when the entities selected consist of “replicators.” A replicator is a unit of coded information, of high fidelity but occasionally mutable, with some causal power over its own fate. Genes are such entities… Biological natural selection, at whatever level we may see it, results in evolutionary effects only insofar as it gives rise to changes in gene frequencies in gene pools. Gould, however, saw genes only as “book-keepers,” passively tracking the changes going on at other levels. In my view, whatever else genes are, they must be more than book-keepers, otherwise natural selection cannot work. If a genetic change has no causal influence on bodies, or at least on something that natural selection can “see,” natural selection cannot favour or disfavour it. No evolutionary change will result. (221-222)

Thus we come full circle as Dawkins comes dangerously close to acknowledging Gould’s original point about the selfish gene idea. With the book-keeper metaphor, Gould wasn’t suggesting that genes are perfectly inert. Of course, they cause something—but they don’t cause natural selection. Genes build bodies and influence behaviors, but natural selection acts on bodies and behaviors. Genes are the passive book-keepers with regard to the effects of natural selection, even though they’re active agents with regard to bodies. Again, the question becomes, do the processes that happen at higher levels of analysis operate with enough regularity to produce measurable changes in gene frequencies that a strict gene-level analysis would miss or obscure? Yes, evolution is genetic change. But the task of evolutionary biologists is to understand how those changes come about.

            Gould died in May of 2002, in the middle of a correspondence he had been carrying on with Dawkins regarding how best to deal with an emerging creationist propaganda campaign called intelligent design, a set of ideas they both agreed were contemptible nonsense. These men were in many ways the opposing generals of the so-called Darwin Wars in the 1990s, but, as exasperated as they clearly got with each other’s writing at times, they always seemed genuinely interested in, and amused by, what the other had to say. In his essay on Gould’s final work, Dawkins writes,

The Structure of Evolutionary Theory is such a massively powerful last word, it will keep us all busy replying to it for years. What a brilliant way for a scholar to go. I shall miss him. (222)

[I’ve narrowed the scope of this post to make the ideas as manageable as possible. This account of the debate leaves out many important names and is by no means comprehensive. A good first step if you’re interested in Dawkins’s and Gould’s ideas is to read The Selfish Gene and Full House.]  

Read Part 2:

A CRASH COURSE IN MULTILEVEL SELECTION THEORY PART 2: STEVEN PINKER FALLS PREY TO THE AVERAGING FALLACY SOBER AND WILSON TRIED TO WARN HIM ABOUT

And Part 3:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

The Storytelling Animal: a Light Read with Weighty Implications

The Storytelling Animal is not groundbreaking. But the style of the book contributes something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams, through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be. The effect is that we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe.

A review of Jonathan Gottschall's The Storytelling Animal: How Stories Make Us Human

Vivian Paley, like many other preschool and kindergarten teachers in the 1970s, was disturbed by how her young charges always separated themselves by gender at playtime. She was further disturbed by how closely the play of each gender group hewed to the old stereotypes about girls and boys. Unlike most other teachers, though, Paley tried to do something about it. Her 1984 book Boys and Girls: Superheroes in the Doll Corner demonstrates in microcosm how quixotic social reforms inspired by the assumption that all behaviors are shaped solely by upbringing and culture can be. Eventually, Paley realized that it wasn’t the children who needed to learn new ways of thinking and behaving, but herself. What happened in her classrooms in the late 70s, developmental psychologists have reliably determined, is the same thing that happens when you put kids together anywhere in the world. As Jonathan Gottschall explains,

Dozens of studies across five decades and a multitude of cultures have found essentially what Paley found in her Midwestern classroom: boys and girls spontaneously segregate themselves by sex; boys engage in more rough-and-tumble play; fantasy play is more frequent in girls, more sophisticated, and more focused on pretend parenting; boys are generally more aggressive and less nurturing than girls, with the differences being present and measurable by the seventeenth month of life. (39)

Paley’s study is one of several you probably wouldn’t expect to find discussed in a book about our human fascination with storytelling. But, as Gottschall makes clear in The Storytelling Animal: How Stories Make Us Human, there really aren’t many areas of human existence that aren’t relevant to a discussion of the role stories play in our lives. Those rowdy boys in Paley’s classes were playing recognizable characters from current action and sci-fi movies, and the fantasies of the girls were right out of Grimm’s fairy tales (it’s easy to see why people might assume these cultural staples were to blame for the sex differences). And the play itself was structured around one of the key ingredients—really the key ingredient—of any compelling story, trouble, whether in the form of invading pirates or people trying to poison babies.

The Storytelling Animal is the book to start with if you have yet to cut your teeth on any of the other recent efforts to bring the study of narrative into the realm of cognitive and evolutionary psychology. Gottschall covers many of the central themes of this burgeoning field without getting into the weedier territories of game theory or selection at multiple levels. While readers accustomed to more technical works may balk at wading through all the author’s anecdotes about his daughters, Gottschall’s keen sense of measure and the light touch of his prose keep the book from getting bogged down in frivolousness. This applies as well to the sections in which he succumbs to a temptation any writer faces when trying to explain some aspect of storytelling: making a few forays into penning abortive, experimental plots of his own.

None of the central theses of The Storytelling Animal is groundbreaking. But the style and layout of the book contribute something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion, the way most science books do. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams—which contra Freud are seldom centered on wish-fulfillment—through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be (or actually is, if you’ve read D.F. Wallace’s last novel, about an IRS clerk). The effect is that instead of simply having a new idea to toss around we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe. And we appreciate just how integral story is to almost everything we do.

This gloss of Gottschall’s approach gives a sense of what is truly original about The Storytelling Animal—it doesn’t seal off narrative as discrete from other features of human existence but rather shows how stories permeate every aspect of our lives, from our dreams to our plans for the future, even our sense of our own identity. In a chapter titled “Life Stories,” Gottschall writes,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all of the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television, while they eat pork rinds dipped in Miracle Whip. (171)

If you find this observation a tad unsettling, imagine it situated on a page underneath a mug shot of John Wayne Gacy with a caption explaining how he thought of himself “more as a victim than as a perpetrator.” For the most part, though, stories follow an easily identifiable moral logic, which Gottschall demonstrates with a short plot of his own based on the hypothetical situations Jonathan Haidt designed to induce moral dumbfounding. This almost inviolable moral underpinning of narratives suggests to Gottschall that one of the functions of stories is to encourage a sense of shared values and concern for the wider community, a role similar to the one D.S. Wilson sees religion as having played, and continuing to play, in human evolution.

Though Gottschall stays away from the inside baseball stuff for the most part, he does come down firmly on one issue in opposition to at least one of the leading lights of the field. Gottschall imagines a future “exodus” from the real world into virtual story realms that are much closer to the holodecks of Star Trek than to current World of Warcraft interfaces. The assumption here is that people’s emotional involvement with stories results from audience members imagining themselves to be the protagonist. But interactive videogames are probably much closer to actual wish-fulfillment than the more passive approaches to attending to a story—hence the god-like powers and grandiose speechifying.

William Flesch challenges the identification theory in his own (much more technical) book Comeuppance. He points out that films that have experimented with a first-person approach to camera work failed to capture audiences (think of the complicated contraption that filmed Will Smith’s face as he was running from the zombies in I am Legend). Flesch writes, “If I imagined I were a character, I could not see her face; thus seeing her face means I must have a perspective on her that prevents perfect (naïve) identification” (16). One of the ways we sympathize with other people, though, is to mirror them—to feel, at least to some degree, their pain. That makes the issue a complicated one. Flesch believes our emotional involvement comes not from identification but from a desire to see virtuous characters come through the troubles of the plot unharmed, vindicated, maybe even rewarded. Attending to a story therefore entails tracking characters' interactions to see if they are in fact virtuous, then hoping desperately to see their virtue rewarded.

Gottschall does his best to avoid dismissing the typical obsessive LARPer (live-action role player) as the “stereotypical Dungeons and Dragons player” who “is a pimply, introverted boy who isn’t cool and can’t play sports or attract girls” (190). And he does his best to end his book on an optimistic note. But the exodus he writes about may be an example of another phenomenon he discusses. First the optimism:

Humans evolved to crave story. This craving has, on the whole, been a good thing for us. Stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures. Stories have been a great boon to our species. (197)

But he then makes an analogy with food cravings, which likewise evolved to serve a beneficial function yet in the modern world are wreaking havoc with our health. Just as there is junk food, so there is such a thing as “junk story,” possibly leading to what Brian Boyd, another luminary in evolutionary criticism, calls a “mental diabetes epidemic” (198). In the context of America’s current education woes, and with how easy it is to conjure images of glazy-eyed zombie students, the idea that video games and shows like Jersey Shore are “the story equivalent of deep-fried Twinkies” (197) makes an unnerving amount of sense.

Here, as in the section on how our personal histories are more fictionalized rewritings than accurate recordings, Gottschall manages to achieve something the playful tone and off-handed experimentation don't prepare you for. The surprising accomplishment of this unassuming little book (200 pages) is that it never stops being a light read even as it takes on discoveries with extremely weighty implications. The temptation to eat deep-fried Twinkies is only going to get more powerful as story-delivery systems become more technologically advanced. Might we have already begun the zombie apocalypse without anyone noticing—and, if so, are there already heroes working to save us whom we won’t recognize until long after the struggle has ended and we’ve begun weaving its history into a workable narrative, a legend?

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

Dennis Junk

The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind

Jonathan Haidt extends an olive branch to conservatives by acknowledging their morality has more dimensions than the morality of liberals. But is he mistaking what’s intuitive for what’s right? A critical, yet admiring review of The Righteous Mind.

A Review of Jonathan Haidt's new book,

The Righteous Mind: Why Good People are Divided by Politics and Religion

Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robbers Cave in southern Oklahoma where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and they each came up with a name for themselves, the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-o-war. The goal was to find out if animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.

            So do conservatives.

           This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, “Anyone who values truth should stop worshipping reason” (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in “moral matrices” that incorporate all six, while liberal morality rests disproportionately on just three. At times, Haidt intimates that more dimensions is better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.

One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.

Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out-of-touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?

The Elephant in the Room

Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question. When we don’t want to believe something, we ask ourselves, “Must I believe it?” Then, Haidt writes, “we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on ‘motivated reasoning,’ showing the many tricks people use to reach the conclusions they want to reach” (84).

Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.

They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)

Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.

Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you’ll never recognize them yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image for an earlier book on happiness, so the use of the GOP mascot was accidental. But because of the more intuitive nature of conservative beliefs it’s appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,

the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)

The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper “The Weirdest People in the World?” that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can’t argue that what’s popular is necessarily better. But he manages to convey that attitude implicitly, even though he can’t really share the attitude himself.

Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression are involved—and insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:

Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)

This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,

we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)

As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)

What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.

A Taste for Self-Righteousness

The divide over morality can largely be reduced to the differences between the urban educated and the poor not-so-educated. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:

But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say,

You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)

The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)

Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:

There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)

But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.

In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India. He went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” It was an earlier account of this sojourn Haidt had written for the online salon The Edge that first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”

On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)

The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to feel and experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what's known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order, not to the point of advocating hierarchy or rigid sex roles but seeing value in the complex network of interdependence.

The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That’s where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the different dimensions of taste that make up our flavor palette. The two that everyone shares but that liberals give priority to whenever any two or more suggest different responses are Care and Harm—hurting people is wrong and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity, loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe’s symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth one, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,

many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)

Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.

Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)

But if he really were restricting himself to description he would have no beef with the utilitarian ethicists like Mill, the deontological ones like Kant, or for that matter with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, father of utilitarianism, being autistic (the trendy psychological diagnosis du jour) (120). But, like a lawyer who throws out a damning but inadmissible comment only to say “withdrawn” when the defense objects, he assures us that he doesn’t mean the autism thing as an ad hominem.

From The Moral Foundations Website

I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.

In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; they simply believe they should question whether acting on them is appropriate in the given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism that Haidt extolls—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.

Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.

Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.

Resistance to the Hive Switch is Futile

“We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that

anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)
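
A minimal sketch can make the balance Haidt is describing concrete. In the toy trait-group model below, an “altruist” trait pays a cost within its own group, but groups containing more altruists contribute more to the next generation; whether the trait spreads depends on how those two forces weigh against each other. The structure and parameters are my own illustrative assumptions, in the general spirit of Sober and Wilson’s trait-group models, not anything taken from Haidt.

```python
import random

def altruist_frequency(n_groups=50, group_size=10, start_freq=0.3,
                       cost=0.05, group_benefit=1.0, generations=100):
    """Toy trait-group model: altruists pay a cost within their group,
    but groups with more altruists contribute more offspring to the
    next generation's global pool."""
    freq = start_freq
    for _ in range(generations):
        # Groups assembled by random sampling from the global population.
        group_counts = [sum(random.random() < freq for _ in range(group_size))
                        for _ in range(n_groups)]
        altruist_output = 0.0
        total_output = 0.0
        for count in group_counts:
            p = count / group_size
            group_output = 1.0 + group_benefit * p  # between-group selection
            # Within-group selection: altruists' share shrinks with the cost.
            altruist_share = p * (1 - cost) / (p * (1 - cost) + (1 - p))
            altruist_output += group_output * altruist_share
            total_output += group_output
        freq = altruist_output / total_output
    return freq

# With the group-level benefit, the altruist trait tends to spread;
# set group_benefit=0 and the within-group cost steadily erodes it.
print(round(altruist_frequency(), 3))
print(round(altruist_frequency(group_benefit=0.0), 3))
```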

            The most interesting idea in this section is that humans possess what Haidt calls a “hive switch” that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an “altered state of consciousness” when he was marching in formation with fellow soldiers in his army days. He describes it as a “sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life” (221). Sociologist Emile Durkheim referred to this same experience as “collective effervescence.” People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It’s a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.

Surprisingly, the altruism inspired by this sense of the sacred triggered by coordinated activity, though primarily directed at fellow group members—parochial altruism—can also flow out in ways that aren’t entirely tribal.

Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,

These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)

The Sacred foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that “just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.” Sosis went on to identify “one master variable” which accounted for the difference between success and failure for religious groups: “the number of costly sacrifices that each commune demanded from its members” (257). But sacrifices demanded by secular groups made no difference whatsoever. Haidt concludes,

In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)

This section captures the best and the worst of Haidt's work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that’s a brilliant contribution to theories going back through D.S. Wilson and Emile Durkheim all the way to Darwin. Contemplating it sparks a sense of wonder that must emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.

The costs critics of religion point to aren’t the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, “arbitrary” beliefs and practices of religious communities to observing the movements of a football for the purpose of trying to understand why people love watching games. It’s the coming together as a group, he suggests, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren’t what’s important then there’s no reason they have to be arbitrary—and there’s no reason they should have to entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for “worshipping reason” and celebrating the collective endeavor known as science? Why doesn’t he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by wrongly assuming anyone who doesn’t think flushing an American flag down the toilet is wrong has no sense of the sacred, shaking his finger at them, effectively saying, “Rallying around a cause is what being human is all about, but what you flag-flushers think is important just isn’t worthy—even though it’s exactly what I think is important too, what I’ve devoted my career and this book you're holding to anyway.”

As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norms before it came about—and still are the norms in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those who are the most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. And surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, and even sacred, ideal.

But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the focus-groupocracy. Interestingly, the third stage of the Robbers Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. And all it took was some shared goals the boys had to cooperate to achieve, as when the camp truck got stuck on the side of the road and both groups had to work together to pull it free. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’ll be there in no time.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

WHY TAMSIN SHAW IMAGINES THE PSYCHOLOGISTS ARE TAKING POWER

Dennis Junk

HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity

We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds?

The appeal of post-apocalyptic stories stems from the joy of experiencing anew the birth of humanity. The renaissance never occurs in M.T. Anderson’s Feed, in which the main character is rendered hopelessly complacent by the entertainment and advertising beamed directly into his brain. And it is that very complacency, the product of our modern civilization's unfathomable complexity, that most threatens our sense of our own humanity. There was likely a time, though, when small groups of our species were beset by outside groups of a different nature, a nature that, when juxtaposed with ours, left no doubt as to who the humans were.

      In Suzanne Collins’ The Hunger Games, Katniss Everdeen reflects on how the life-or-death stakes of the contest she and her fellow “tributes” are made to participate in can transform teenage boys and girls into crazed killers. She’s been brought to a high-tech mega-city from District 12, a mining town as quaint as the so-called Capitol is futuristic. Peeta Mellark, who was chosen by lottery as the other half of the boy-girl pair of tributes from the district, has just said to her, “I want to die as myself…I don’t want them to change me in there. Turn me into some kind of monster that I’m not.” Peeta also wants “to show the Capitol they don’t own me. That I’m more than just a piece in their Games.” The idea startles Katniss, who at this point is thinking of nothing but surviving the games—knowing full well that there are twenty-two more tributes and only one will be allowed to leave the arena alive. Annoyed by Peeta’s pronouncement of a higher purpose, she thinks,

We will see how high and mighty he is when he’s faced with life and death. He’ll probably turn into one of those raging beast tributes, the kind who tries to eat someone’s heart after they’ve killed them. There was a guy like that a few years ago from District 6 called Titus. He went completely savage and the Gamemakers had to have him stunned with electric guns to collect the bodies of the players he’d killed before he ate them. There are no rules in the arena, but cannibalism doesn’t play well with the Capitol audience, so they tried to head it off. (141-3)

Cannibalism is the ultimate relinquishing of the mantle of humanity because it entails denying the humanity of those being hunted for food. It’s the most basic form of selfishness: I kill you so I can live.

The threat posed to humanity by hunger is also the main theme of Cormac McCarthy’s The Road, the story of a father and son wandering around the ruins of a collapsed civilization. The two routinely search abandoned houses for food and supplies, and in one they discover a group of people locked in a cellar. The gruesome clue to the mystery of why they’re being kept is that some have had limbs amputated. The men keeping them are devouring the living bodies a piece at a time. After a harrowing escape, the boy, understandably disturbed, asks, “They’re going to kill those people, arent they?” His father answers yes but tries to be evasive, hoping to protect him from the harsh reality, leading to this exchange:

Why do they have to do that?

I dont know.

Are they going to eat them?

I dont know.

They’re going to eat them, arent they?

Yes.

And we couldnt help them because then they’d eat us too.

Yes.

And that’s why we couldnt help them.

Yes.

Okay.

But of course it’s not okay. After they’ve put some more distance between them and the human abattoir, the boy starts to cry. His father presses him to explain what’s wrong:

Just tell me.

We wouldnt ever eat anybody, would we?

No. Of course not.

Even if we were starving?

We’re starving now.

You said we werent.

I said we werent dying. I didnt say we werent starving.

But we wouldnt.

No. We wouldnt.

No matter what.

No. No matter what.

Because we’re the good guys.

Yes.

And we’re carrying the fire.

And we’re carrying the fire. Yes.

Okay. (127-9)

And this time it actually is okay because the boy, like Peeta Mellark, has made it clear that if the choice is between dying and becoming a monster he wants to die.

This preference for death over preying on others is one of the hallmarks of humanity, and it poses a major difficulty for economists and evolutionary biologists alike. How could this type of selflessness possibly evolve?

John von Neumann, one of the founders of game theory, played an important role in developing the policies that have so far prevented a real-life apocalypse from taking place. He is credited with the strategy of Mutually Assured Destruction, or MAD (he liked amusing acronyms), that prevailed during the Cold War. As the name implies, the goal was to assure the Soviets that if they attacked us everyone would die. Since the U.S. knew the same was true of any of our own plans to attack the Soviets, a tense peace, or Cold War, was the inevitable result. But von Neumann was not at all content with this peace. He devoted his twilight years to pushing for the development of Intercontinental Ballistic Missiles (ICBMs) that would allow the U.S. to bomb Russia without giving the Soviets a chance to respond. In 1950, he made the infamous remark that inspired Dr. Strangelove: “If you say why not bomb them tomorrow, I say, why not today. If you say today at five o’clock, I say why not one o’clock?”

Von Neumann’s eagerness to hit the Russians first was based on the logic of game theory, and that same logic is at play in The Hunger Games and other post-apocalyptic fiction. The problem with cooperation, whether between rival nations or between individual competitors in a game of life-or-death, is that it requires trust—and once one player begins to trust the other, he or she becomes vulnerable to exploitation, the proverbial stab in the back from the person who’s supposed to be watching it. Game theorists model this dynamic with a thought experiment called the Prisoner’s Dilemma. Imagine two criminals are captured and taken to separate interrogation rooms. Each criminal has the option of either cooperating with the other criminal by remaining silent or betraying him or her by confessing. Here’s a graph of the possible outcomes:
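For readers who want the logic spelled out, here is a minimal sketch in Python of the standard payoff structure; the prison sentences are illustrative numbers of my own, and only their ordering matters.

```python
# A minimal sketch of the classic Prisoner's Dilemma payoffs.
# The sentence lengths are illustrative assumptions; lower is better.
# Each entry maps (my_choice, partner_choice) to my years in prison.
payoffs = {
    ("silent",  "silent"):  1,   # we both cooperate by staying quiet
    ("silent",  "confess"): 10,  # I stay quiet while my partner betrays me
    ("confess", "silent"):  0,   # I betray a silent partner and walk free
    ("confess", "confess"): 5,   # we betray each other
}

def best_response(partner_choice):
    """Return whichever choice gives me the shorter sentence,
    holding my partner's choice fixed."""
    return min(("silent", "confess"),
               key=lambda mine: payoffs[(mine, partner_choice)])

print(best_response("silent"))   # confess
print(best_response("confess"))  # confess
```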

No matter what the other player does, each of them achieves a better outcome by confessing. Von Neumann saw the standoff between the U.S. and the Soviets as a Prisoner’s Dilemma; by not launching nukes, each side was cooperating with the other. Eventually, though, one of them had to realize that the only rational thing to do was to be the first to defect.

But the way humans play games is a bit different. As it turned out, von Neumann was wrong about the game theory implications of the Cold War—neither side ever did pull the trigger; both prisoners kept their mouths shut. In Collins' novel, Katniss faces a Prisoner's Dilemma every time she encounters another tribute who may be willing to team up with her in the hunger games. The graph for her and Peeta looks like this:
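A rough sketch of the same logic, with made-up survival odds standing in for the payoffs (the numbers are my own invention; the point is only that an ally’s usefulness changes the best response):

```python
# An illustrative sketch of the early-game dilemma Katniss and Peeta face.
# The survival probabilities are invented for the example; higher is better.
payoffs = {
    ("team up", "team up"): 0.6,  # we pool skills and watch each other's backs
    ("team up", "betray"):  0.1,  # I trust someone who stabs me in the back
    ("betray",  "team up"): 0.4,  # I go it alone and lose the ally's skills
    ("betray",  "betray"):  0.2,  # open hostility from the start
}

def best_response(ally_choice):
    """Pick the move that maximizes my survival odds, given the ally's move."""
    return max(("team up", "betray"),
               key=lambda mine: payoffs[(mine, ally_choice)])

print(best_response("team up"))  # team up: betraying a useful ally costs me
print(best_response("betray"))   # betray: never trust someone out to kill you
```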

In the context of the hunger games, then, it makes sense to team up with rivals as long as they have useful skills, knowledge, or strength. Each tribute knows, furthermore, that as long as he or she is useful to a teammate, it would be irrational for that teammate to defect.

The Prisoner’s Dilemma logic gets much more complicated when you start having players try to solve it over multiple rounds of play. Game theorists refer to each time a player has to make a choice as an iteration. And to model human cooperative behavior you not only have to allow for multiple iterations but also find a way to factor in each player’s awareness of how rivals have responded to the dilemma in the past. Humans have reputations. Katniss, for instance, doesn’t trust the Career tributes because they have a reputation for being ruthless. She even begins to suspect Peeta when she sees that he’s teamed up with the Careers. (His knowledge of Katniss is a resource to them, but he’s using that knowledge in an irrational way—to protect her instead of himself.) On the other hand, Katniss trusts Rue because she's young and dependent—and because she comes from an adjacent district not known for sending cold-blooded tributes.
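Here is one way to sketch repeated play with reputations, using a strategy of my own devising rather than anything from Collins or the game theory literature: each player consults the other’s record before deciding whether to cooperate.

```python
# A toy model of repeated play with reputations: cooperate with partners
# who have mostly cooperated in the past, defect against those who haven't.
def reputation_strategy(partner_record):
    """Cooperate if the partner has cooperated at least half the time so far."""
    if not partner_record:
        return "cooperate"  # give strangers the benefit of the doubt
    return ("cooperate"
            if partner_record.count("cooperate") >= len(partner_record) / 2
            else "defect")

def play(rounds=10):
    a_record, b_record = [], []  # each player's public history of moves
    for _ in range(rounds):
        a_move = reputation_strategy(b_record)  # A judges B by B's record
        b_move = reputation_strategy(a_record)  # B judges A by A's record
        a_record.append(a_move)
        b_record.append(b_move)
    return a_record, b_record

# Two reputation-minded players settle into stable mutual cooperation.
print(play())
```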

When you have multiple iterations and reputations, you also open the door for punishments and rewards. At the most basic level, people reward those who they witness cooperating by being more willing to cooperate with them. As we read or watch The Hunger Games, we can actually experience the emotional shift that occurs in ourselves as we witness Katniss’s cooperative behavior.

People punish those who defect by being especially reluctant to trust them. At this point, the analysis is still within the realm of the purely selfish and rational. But you can’t stay in that realm for very long when you’re talking about the ways humans respond to one another.

            Each time Katniss encounters another tribute in the games she faces a Prisoner’s Dilemma. Until the final round, the hunger games are not a zero-sum contest—which means that a gain for one doesn’t necessarily mean a loss for the other. Ultimately, of course, Katniss and Peeta are playing a zero-sum game; since only one tribute can win, one of any two surviving players at the end will have to kill the other (or let him die). Every time one tribute kills another, the math of the Prisoner’s Dilemma has to be adjusted. Peeta, for instance, wouldn’t want to betray Katniss early on, while there are still several tributes trying to kill them, but he would want to balance the benefits of her resources with whatever advantage he could gain from her unsuspecting trust—so as they approach the last few tributes, his temptation to betray her gets stronger. Of course, Katniss knows this too, and so the same logic applies for her.

As everyone who’s read the novel or seen the movie knows, however, this isn’t how either Peeta or Katniss plays in the hunger games. And we already have an idea of why that is: Peeta has said he doesn’t want to let the games turn him into a monster. Figuring out the calculus of the most rational decisions is well and good, but humans are often moved by their emotions—fear, affection, guilt, indebtedness, love, rage—to behave in ways that are completely irrational, at least in the near term. Peeta is in love with Katniss, and though she doesn’t quite trust him at first, she proves willing to sacrifice herself in order to help him survive. This goes well beyond cooperation to serve purely selfish interests.

Many evolutionary theorists believe that at some point in our evolutionary history, humans began competing with each other to see who could be the most cooperative. This paradoxical idea emerges out of a type of interaction between and among individuals called costly signaling. Many social creatures must decide who among their conspecifics would make the best allies. And all sexually reproducing animals have to have some way to decide with whom to mate. Determining who would make the best ally or who would be the fittest mate is so important that only the most reliable signals are given any heed. What makes the signals reliable is their cost—only the fittest can afford to engage in costly signaling. Some animals have elaborate feathers that are conspicuous to predators; others have massive antlers. This is known as the handicap principle. In humans, the theory goes, altruism somehow emerged as a costly signal, so that the fittest demonstrate their fitness by engaging in behaviors that benefit others to their own detriment. The boy in The Road, for instance, isn’t just upset by the prospect of having to turn to cannibalism himself; he’s sad that he and his father weren’t able to help the other people they found locked in the cellar.

We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds? But when it really comes down to it, when it really matters most, both Katniss and Peeta demonstrate that they’re willing to protect one another even at a cost to themselves.

The birth of humanity occurred, somewhat figuratively, when people refused to play the game of me versus you and determined instead to play us versus them. Humans don’t like zero-sum games, and whenever possible they try to change the rules so there can be more than one winner. To do that, though, they have to make it clear that they would rather die than betray their teammates. In The Road, the father and his son continue to carry the fire, and in The Hunger Games Peeta gets his chance to show he’d rather die than be turned into a monster. By the end of the story, it’s really no surprise what Katniss chooses to do either. Saving her sister may not have been purely altruistic from a genetic standpoint. But Peeta isn’t related to her, nor is he her only—or even her most eligible—suitor. Still, her moments of cold strategizing notwithstanding, we've had her picked as an altruist all along.

Of course, humanity may have begun with the sense that it’s us versus them, but as it’s matured the us has grown to encompass an ever wider assortment of people and the them has receded to include more and more circumscribed groups of evil-doers. Unfortunately, there are still all too many people who are overly eager to treat unfamiliar groups as rival tribes, and all too many people who believe that the best governing principle for society is competition—the war of all against all. Altruism is one of the main hallmarks of humanity, and yet some people are simply more altruistic than others. Let’s just hope that it doesn’t come down to us versus them…again.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

The Adaptive Appeal of Bad Boys

From the intro to my master’s thesis where I explore the evolved psychological dynamics of storytelling and witnessing, with a special emphasis on the paradox that the most compelling characters are often less than perfect human beings. Why do audiences like Milton’s Satan, for instance? Why did we all fall in love with Tyler Durden from Fight Club? It turns out both of these characters give indications that they just may be more altruistic than they appear at first.

[Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis]

In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and each has a few treats placed before it. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.

Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s Nova series The Human Spark (jump to chapter 5), which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and Nova viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.

            The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments.  This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).
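To make the incentives concrete, here is a small sketch of the payoffs in such a game; the endowments and punishment ratios are illustrative numbers of my own, not the figures Fehr and Fischbacher used.

```python
# An illustrative three-player Dictator Game: the dictator keeps part of an
# endowment, the receiver must accept the rest, and the witnessing third
# party may pay to punish the dictator. All amounts are made-up examples.
ENDOWMENT = 100        # what the dictator starts with
THIRD_PARTY_FUND = 50  # what the third party starts with
PUNISH_COST = 1        # what each punishment point costs the third party
PUNISH_IMPACT = 3      # what each punishment point removes from the dictator

def outcome(dictator_keeps, punishment_points):
    return {
        "dictator":    dictator_keeps - punishment_points * PUNISH_IMPACT,
        "receiver":    ENDOWMENT - dictator_keeps,
        "third party": THIRD_PARTY_FUND - punishment_points * PUNISH_COST,
    }

# A purely selfish witness never punishes, since punishing only costs them:
print(outcome(dictator_keeps=90, punishment_points=0))
# Real participants routinely pay anyway when the split looks unfair:
print(outcome(dictator_keeps=90, punishment_points=10))
```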

It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.

The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to discern between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth–order reciprocation, so not only will individuals be considered culpable for offenses they commit but also for offenses they passively witness. Flesch explains,

Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)

Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional. 

        That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,

Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters. 

Game theory experiments that have been conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher is incurring a cost to him or herself in order to ensure that selfish actors don’t have a chance to get a foothold in the larger, cooperative group.
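In code, the responder’s behavior looks something like the following sketch; the 25 percent threshold echoes the figure Flesch cites, but the exact cutoff varies from person to person.

```python
# An illustrative Ultimatum Game responder who vetoes insulting offers,
# giving up a sure gain in order to punish a selfish proposer.
TOTAL = 100

def responder(offer, threshold=0.25):
    """Accept only if the offer is at least `threshold` of the total sum."""
    return "accept" if offer >= threshold * TOTAL else "reject"

def payoffs(offer):
    if responder(offer) == "accept":
        return {"proposer": TOTAL - offer, "responder": offer}
    return {"proposer": 0, "responder": 0}  # a veto leaves both with nothing

print(payoffs(40))  # a fair-enough offer: both players keep something
print(payoffs(10))  # an insulting offer: the responder pays to punish
```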

The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:

Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)

Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to time and energy spent convincing skeptical peers a crime has indeed been committed.

Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.

If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice guy bona fides. On the other hand, Rochester was eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.

This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.

In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences which deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display.
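The underlying logic can be put in a few lines; the costs and budgets below are toy numbers of my own, not anything from the Zahavis.

```python
# A toy illustration of the handicap principle: a signal stays honest
# because only individuals fit enough to absorb its cost can afford it.
SIGNAL_COST = 30  # what the display (plumage, antlers, altruism) costs

def sends_signal(fitness_budget):
    """An individual signals only if paying the cost still leaves it viable."""
    return fitness_budget - SIGNAL_COST > 0

for budget in (20, 50, 90):
    verdict = "can afford the display" if sends_signal(budget) else "cannot"
    print(f"fitness budget {budget}: {verdict}")
```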

Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:

If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)

Taken together, the evidence Flesch presents suggests the audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.

The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:

When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)

These characters’ bad behavior will also likely serve an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish to harm innocent characters—or that they’re irredeemable would severely undermine the theory. As the first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters their creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.

(Watch Hamlin discussing the research in an interview from earlier today.)

And check out this video of the experiments.

Dennis Junk

Campaigning Deities: Justifying the ways of Satan

Why do readers tend to admire Satan in Milton’s Paradise Lost? It’s one of the instances where a nominally bad character garners more attention and sympathy than the good guy, a conundrum I researched through an evolutionary lens as part of my master’s thesis.

[Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis]

Milton believed Christianity more than worthy of a poetic canon in the tradition of the classical poets, and Paradise Lost represents his effort at establishing one. What his Christian epic has offered many readers over the centuries, however, is an invitation to weigh the actions and motivations of immortals in mortal terms. In the story, God becomes a human king, albeit one with superhuman powers, while Satan becomes an upstart subject. As Milton attempts to “justify the ways of God to Man,” he is taking it upon himself simultaneously, and inadvertently, to justify the absolute dominion of a human dictator. One of the consequences of this shift in perspective is the transformation of a philosophical tradition devoted to parsing the logic of biblical teachings into something akin to a political campaign between two rival leaders, each laying out his respective platform alongside a case against his rival. What was hitherto recondite and academic becomes in Milton’s work immediate and visceral.

Keats famously penned the wonderfully self-proving postulate, “Axioms in philosophy are not axioms until they are proved upon our pulses,” which leaves open the question of how an axiom might be so proved. Milton’s God responds to Satan’s approach to Earth, and his foreknowledge of Satan’s success in tempting the original pair, with a preemptive defense of his preordained punishment of Man:

…Whose fault?

Whose but his own? Ingrate! He had of Me

All he could have. I made him just and right,

Sufficient to have stood though free to fall.

Such I created all th’ ethereal pow’rs

And spirits, both them who stood and who failed:

Freely they stood who stood and fell who fell.

Not free, what proof could they have giv’n sincere

Of true allegiance, constant faith or love

Where only what they needs must do appeared,

Not what they would? What praise could they receive?

What pleasure I from such obedience paid

When will and reason… had served necessity,

Not me? (3.96-111)

God is defending himself against the charge that his foreknowledge of the fall implies that Man’s decision to disobey was born of something other than his free will. What choice could there have been if the outcome of Satan’s temptation was predetermined? If it wasn’t predetermined, how could God know what the outcome would be in advance? God’s answer—of course I granted humans free will because otherwise their obedience would mean nothing—only introduces further doubt. Now we must wonder why God cherishes Man’s obedience so fervently. Is God hungry for political power? If we conclude he is—and that conclusion seems eminently warranted—then we find ourselves on the side of Satan. It’s not so much God’s foreknowledge of Man’s fall that undermines human freedom; it’s God’s insistence on our obedience, under threat of his terrible punishment.

Milton faces a still greater challenge in his attempt to justify God’s ways “upon our pulses” when it comes to the fallout of Man’s original act of disobedience. The Son argues on behalf of Man, pointing out that the original sin was brought about through temptation. If God responds by turning against Man, then Satan wins. The Son thus argues that God must do something to thwart Satan: “Or shall the Adversary thus obtain/ His end and frustrate Thine?” (3.156-7). Before laying out his plan for Man’s redemption, God explains why punishment is necessary:

…Man disobeying

Disloyal breaks his fealty and sins

Against the high supremacy of Heav’n,

Affecting godhead, and so, losing all,

To expiate his treason hath naught left

But to destruction sacred and devote

He with his whole posterity must die. (3.203-9)

The potential contradiction between foreknowledge and free choice may be abstruse enough for Milton’s character to convincingly discount: “If I foreknew/ Foreknowledge had no influence on their fault/ Which had no less proved certain unforeknown” (3.116-9). There is another contradiction, however, that Milton neglects to take on. If Man is “Sufficient to have stood though free to fall,” then God must justify his decision to punish the “whole posterity” as opposed to the individuals who choose to disobey. The Son agrees to redeem all of humanity for the offense committed by the original pair. His knowledge that every last human will disobey may not be logically incompatible with their freedom to choose; if every last human does disobey, however, the case for that freedom is severely undermined. The axiom of collective guilt precludes the axiom of freedom of choice both logically and upon our pulses.

In characterizing disobedience as a sin worthy of severe punishment—banishment from paradise, shame, toil, death—an offense he can generously expiate for Man by sacrificing the (his) Son, God seems to be justifying his dominion by pronouncing disobedience to him evil, allowing him to claim that Man’s evil made it necessary for him to suffer a profound loss, the death of his offspring. In place of a justification for his rule, then, God resorts to a simple guilt trip.

Man shall not quite be lost but saved who will,

Yet not of will in him but grace in me

Freely vouchsafed. Once more I will renew

His lapsed pow’rs though forfeit and enthralled

By sin to foul exorbitant desires.

Upheld by me, yet once more he shall stand

On even ground against his mortal foe,

By me upheld that he may know how frail

His fall’n condition is and to me owe

All his deliv’rance, and to none but me. (3.173-83)

Having decided to take on the burden of repairing the damage wrought by Man’s disobedience to him, God explains his plan:

Die he or justice must, unless for him

Some other as able and as willing pay

The rigid satisfaction, death for death. (3.210-3)

He then asks for a volunteer. In an echo of an earlier episode in the poem which has Satan asking for a volunteer to leave hell on a mission of exploration, there is a moment of hesitation before the Son offers himself up to die on Man’s behalf.

…On Me let thine anger fall.

Account Me Man. I for his sake will leave

Thy bosom and this glory next to Thee

Freely put off and for him lastly die

Well pleased. On Me let Death wreak all his rage! (3.237-42)

This great sacrifice, which is supposed to be the basis of the Son’s privileged status over the angels, is immediately undermined because he knows he won’t stay dead for long: “Yet that debt paid/ Thou wilt not leave me in the loathsome grave” (246-7). The Son will only die momentarily. This sacrifice doesn’t stack up well against the real risks and sacrifices made by Satan.

All the poetry about obedience and freedom and debt never takes on the central question Satan’s rebellion forces readers to ponder: Does God deserve our obedience? Or are the labels of good and evil applied arbitrarily? The original pair was forbidden from eating from the Tree of Knowledge—could they possibly have been right to contravene the interdiction? Since it is God being discussed, however, the assumption that his dominion requires no justification, that it is instead simply in the nature of things, might prevail among some readers, as it does for the angels who refuse to join Satan’s rebellion. The angels, after all, owe their very existence to God, as Abdiel insists to Satan. Who, then, are any of them to question his authority? This argument sets the stage for Satan’s remarkable rebuttal:

…Strange point and new!

Doctrine which we would know whence learnt: who saw

When this creation was? Remember’st thou

Thy making while the Maker gave thee being?

We know no time when we were not as now,

Know none before us, self-begot, self-raised

By our own quick’ning power…

Our puissance is our own. Our own right hand

Shall teach us highest deeds by proof to try

Who is our equal. (5.855-66)

Just as a pharaoh could claim credit for all the monuments and infrastructure he had commissioned the construction of, any king or dictator might try to convince his subjects that his deeds far exceed what he is truly capable of. If there’s no record and no witness—or if the records have been doctored and the witnesses silenced—the subjects have to take the king’s word for it.

That God’s dominion depends on some natural order, which he himself presumably put in place, makes his tendency to protect knowledge deeply suspicious. Even the angels ultimately have to take God’s claims to have created the universe and them along with it solely on faith. Because that same unquestioning faith is precisely what Satan and the readers of Paradise Lost are seeking a justification for, they could be forgiven for finding the answer tautological and unsatisfying. It is the Tree of Knowledge of Good and Evil that Adam and Eve are forbidden to eat fruit from. When Adam, after hearing Raphael’s recounting of the war in heaven, asks the angel how the earth was created, he does receive an answer, but only after a suspicious preamble:

…such commission from above

I have received to answer thy desire

Of knowledge with bounds. Beyond abstain

To ask nor let thine own inventions hope

Things not revealed which the invisible King

Only omniscient hath suppressed in night,

To none communicable in Earth or Heaven:

Enough is left besides to search and know. (7.118-125)

Raphael goes on to compare knowledge to food, suggesting that excessively indulging curiosity is unhealthy. This proscription of knowledge reminded Shelley of the Prometheus myth. It might remind modern readers of The Wizard of Oz—“Pay no attention to that man behind the curtain”—or of the space monkeys in Fight Club, who repeatedly remind us that “The first rule of Project Mayhem is, you do not ask questions.” It may also resonate with news about dictators in Asia or the Middle East trying desperately to keep social media outlets from spreading word of their atrocities.

Like the protesters of the Arab Spring, Satan is putting himself at great risk by challenging God’s authority. If God’s dominion over Man and the angels is evidence not of his benevolence but of his supreme selfishness, then Satan’s rebellion becomes an attempt at altruistic punishment. The extrapolation from economic experiments like the ultimatum and dictator games to efforts to topple dictators may seem like a stretch, especially if humans are predisposed to forming and accepting positions in hierarchies, as a casual survey of virtually any modern organization suggests is the case.

Organized institutions, however, are a recent development in terms of human evolution. Lucas Bridges, the son of an English missionary, wrote about his experiences with the Ona foragers in Tierra del Fuego in his 1948 book Uttermost Part of the Earth, and he expresses his amusement at his fellow outsiders’ befuddlement when they learn about the Ona’s political dynamics:

A certain scientist visited our part of the world and, in answer to his inquiries on this matter, I told him that the Ona had no chieftains, as we understand the word. Seeing that he did not believe me, I summoned Kankoat, who by that time spoke some Spanish. When the visitor repeated his question, Kankoat, too polite to answer in the negative, said: “Yes, senor, we, the Ona, have many chiefs. The men are all captains and all the women are sailors” (quoted in Boehm 62).

At least among Ona men, it seems there was no clear hierarchy. The anthropologist Richard Lee discovered a similar dynamic operating among the !Kung foragers of the Kalahari. In order to ensure that no one in the group can attain an elevated status which would allow him to dominate the others, several leveling mechanisms are in place. Lee quotes one of his informants:

When a young man kills much meat, he comes to think of himself as a chief or a big man, and he thinks of the rest of us as his servants or inferiors. We can’t accept this. We refuse one who boasts, for someday his pride will make him kill somebody. So we always speak of his meat as worthless. In this way we cool his heart and make him gentle. (quoted in Boehm 45)

These examples of egalitarianism among nomadic foragers are part of anthropologist Christopher Boehm’s survey of every known group of hunter-gatherers. His central finding is that “A distinctively egalitarian political style is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-36). This finding bears on any discussion of human evolution and human nature because small groups like these constituted the whole of humanity for all but what amounts to the final instants of geological time.

Also read:

THE ADAPTIVE APPEAL OF BAD BOYS

SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

Seduced by Satan

As long as the group they belong to is small enough for each group member to monitor the actions of the others, people can maintain strict egalitarianism, giving up whatever dominance they may desire for the assurance of not being dominated themselves. Satan very likely speaks to this natural ambivalence in humans. Benevolent leaders win our love and admiration through their selflessness and charisma. But no one wants to be a slave.

[Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis]

Why do we like the guys who seem not to care whether or not what they’re doing is right, but who often manage to do what’s right anyway? In the Star Wars series, Han Solo is introduced as a mercenary, concerned only with monetary reward. In the first episode of Mad Men, audiences see Don Draper saying to a woman that they should get married, and then in the final scene he arrives home to his actual wife. Tony Soprano, Jack Sparrow, Tom Sawyer, the list of male characters who flout rules and conventions, who lie, cheat and steal, but who nevertheless compel the attention, the favor, even the love of readers and moviegoers would be difficult to exhaust.

John Milton has been accused both of betraying his own sympathy and admiration for what should be the most detestable character imaginable and of inspiring the same in his readers. When he has Satan, in Paradise Lost, say, “Better to reign in hell than serve in heaven,” many believed he was signaling his support for the overthrow of the king of England. Regicidal politics are well and good—at least from the remove of many generations—but voicing your opinions through such a disreputable mouthpiece? That’s difficult to defend. Imagine using a fictional Hitler to convey your stance on the current president.

Stanley Fish theorizes that Milton’s game was a much subtler one: he didn’t intend for Satan to be sympathetic so much as seductive, so that in being persuaded and won over to him readers would be falling prey to the same temptation that brought about the fall. As humans, all our hearts are marked with original sin. So if many readers of Milton’s magnum opus come away thinking Satan may have been in the right all along, the failure wasn’t the author’s unconstrained admiration for the rebel angel so much as it was his inability to adequately “justify the ways of God to men.” God’s ways may follow a certain logic, but the appeal of Satan’s ways is deeper, more primal.

In the “Argument,” or summary, prefacing Book Three, Milton relays some of God’s logic: “Man hath offended the majesty of God by aspiring to godhead and therefore, with all his progeny devoted to death, must die unless someone can be found sufficient to answer for his offence and undergo his punishment.” The Son volunteers. This reasoning has been justly characterized as “barking mad” by Richard Dawkins. But the lines give us an important insight into what Milton saw as the principal failing of the human race, their ambition to be godlike. It is this ambition which allows us to sympathize with Satan, who incited his fellow angels to rebellion against the rule of God.

In Book Five, we learn that what provoked Satan to rebellion was God’s arbitrary promotion of his own Son to a status higher than the angels: “by Decree/ Another now hath to himself ingross’t/ All Power, and us eclipst under the name/ Of King anointed.” Citing these lines, William Flesch explains, “Satan’s grandeur, even if it is the grandeur of archangel ruined, comes from his iconoclasm, from his desire for liberty.” At the same time, however, Flesch insists that, “Satan’s revolt is not against tyranny. It is against a tyrant whose place he wishes to usurp.” So, it’s not so much freedom from domination he wants, according to Flesch, as the power to dominate.

Anthropologist Christopher Boehm describes the political dynamics of nomadic peoples in his book Hierarchy in the Forest: The Evolution of Egalitarian Behavior, and his descriptions suggest that parsing a motive of domination from one of preserving autonomy is much more complicated than Flesch’s analysis assumes. “In my opinion,” Boehm writes, “nomadic foragers are universally—and all but obsessively—concerned with being free from the authority of others” (68). As long as the group they belong to is small enough for each group member to monitor the actions of the others, people can maintain strict egalitarianism, giving up whatever dominance they may desire for the assurance of not being dominated themselves.

Satan very likely speaks to this natural ambivalence in humans. Benevolent leaders win our love and admiration through their selflessness and charisma. But no one wants to be a slave. Does Satan’s admirable resistance and defiance shade into narcissistic self-aggrandizement and an unchecked will to power? If so, is his tyranny any more savage than that of God? And might there even be something not altogether off-putting about a certain degree of self-indulgent badness?

Also read:

CAMPAIGNING DEITIES: JUSTIFYING THE WAYS OF SATAN

THE ADAPTIVE APPEAL OF BAD BOYS

SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE

Dennis Junk

T.J. Eckleburg Sees Everything: The Great God-Gap in Gatsby part 2 of 2

The simple explanation for Fitzgerald’s decision not to gratify his readers but rather to disappoint and disturb them is that he wanted his novel to serve as an indictment of the types of behavior that are encouraged by the social conditions he describes in the story, conditions which would have been easily recognizable to many readers of his day and which persist into the Twenty-First Century.

Read part 1

Though The Great Gatsby does indeed tell a story of punishment, readers are left with severe doubts as to whether those who receive punishment actually deserve it. Gatsby is involved in criminal activities, and he has an affair with a married woman. Myrtle likewise is guilty of adultery. But does either deserve to die? What about George Wilson? His is the only attempt in the novel at altruistic punishment. So natural is his impulse toward revenge, however, and so given are readers to take that impulse for granted, that its function in preserving a broader norm of cooperation requires explanation. Flesch describes a series of experiments in the field of game theory centering on an exchange called the ultimatum game. One participant is given a sum of money and told he or she must propose a split with a second participant, with the proviso that if the second person rejects the cut neither will get to keep anything. Flesch points out, however, that

It is irrational for the responder not to accept any proposed split from the proposer. The responder will always come out better by accepting than by vetoing. And yet people generally veto offers of less than 25 percent of the original sum. This means they are paying to punish. They are giving up a sure gain in order to punish the selfishness of the proposer. (31)
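
To make the logic of that veto concrete, here is a minimal sketch in Python of the responder's choice. The 25 percent threshold comes from the passage above, but the pot size and offers are invented for illustration and aren't taken from any of the experiments Flesch cites.

```python
# A minimal sketch of the ultimatum game described above. The 25 percent
# rejection threshold is from the passage; the pot and offers are made up.

def responder_decision(offer, pot, rejection_threshold=0.25):
    """Accept the proposed split unless the offer falls below a threshold share.

    A purely self-interested responder would accept any positive offer, since
    something is always better than nothing. Vetoing a low offer means giving
    up a sure gain in order to punish the proposer's selfishness.
    """
    if offer / pot < rejection_threshold:
        return "veto"    # both players get nothing: paying to punish
    return "accept"      # the split goes through

pot = 100
for offer in (50, 30, 20, 10):
    keep = pot - offer
    print(f"Proposer keeps {keep}, offers {offer}: responder -> {responder_decision(offer, pot)}")
```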

To understand why George’s attempt at revenge is altruistic, consider that he had nothing to gain, from a purely selfish and rational perspective, and much to lose by killing the man he believed had killed his wife. He was risking physical harm if a fight ensued. He was risking arrest for murder. Yet if he failed to seek revenge readers would likely see him as somehow less than human. His quest for justice would likely endear him to readers, except that they discover how futile and misguided it is before they even learn it’s underway. Readers, in fact, would probably respond more favorably toward George than toward any other character in the story, including the narrator. But the author deliberately prevents this outcome from occurring.

The simple explanation for Fitzgerald’s decision not to gratify his readers but rather to disappoint and disturb them is that he wanted his novel to serve as an indictment of the types of behavior that are encouraged by the social conditions he describes in the story, conditions which would have been easily recognizable to many readers of his day and which persist into the Twenty-First Century. Though the narrator plays the role of second-order free-rider, the author clearly signals his own readiness to punish by publishing his narrative about such bad behavior perpetrated by characters belonging to a particular group of people, a group corresponding to one readers might encounter outside the realm of fiction.

Fitzgerald makes it obvious in the novel that beyond Tom’s simple contempt for George there exist several more severe impediments to what biologists would call group cohesion but that most readers would simply refer to as a sense of community. The idea of a community as a unified entity whose interests supersede those of the individuals who make it up is something biological anthropologists theorize religion evolved to encourage. In his book Darwin’s Cathedral, in which he attempts to explain religion in terms of group selection theory, David Sloan Wilson writes:

A group of people who abandon self-will and work tirelessly for a greater good will fare very well as a group, much better than if they all pursue their private utilities, as long as the greater good corresponds to the welfare of the group. And religions almost invariably do link the greater good to the welfare of the community of believers, whether an organized modern church or an ethnic group for whom religion is thoroughly intermixed with the rest of their culture. Since religion is such an ancient feature of our species, I have no problem whatsoever imagining the capacity for selflessness and longing to be part of something larger than ourselves as part of our genetic and cultural heritage. (175)

One of the main tasks religious beliefs evolved to handle would have been addressing the same “free-rider problem” William Flesch discovers at the heart of narrative. What religion offers beyond the social monitoring of group members is the presence of invisible beings whose concerns are tied to the collective concerns of the group.

Obviously, Tom Buchanan’s sense of community has clear demarcations. “Civilization is going to pieces,” he warns Nick as prelude to his recommendation of a book titled “The Rise of the Coloured Empires.” “The idea,” Tom explains, “is that if we don’t look out the white race will be—will be utterly submerged” (17). “We’ve got to beat them down,” Daisy helpfully, mockingly chimes in (18). While this animosity toward members of other races seems immoral at first glance, in the social context the Buchanans inhabit it actually represents a concern for the broader group, “the white race.” But Tom’s animosity isn’t limited to other races. What prompts Catherine to tell Nick how her sister “can’t stand” her husband during the gathering in Tom and Myrtle’s apartment is in fact Tom’s ridiculing of George. In response to another character’s suggestion that he’d like to take some photographs of people in Long Island “if I could get the entry,” Tom jokingly insists to Myrtle that she should introduce the man to her husband. Laughing at his own joke, Tom imagines a title for one of the photographs: “‘George B. Wilson at the Gasoline Pump,’ or something like that” (37). Disturbingly, Tom’s contempt for George based on his lowly social status has contaminated Myrtle as well. Asked by her sister why she married George in the first place, she responds, “I married him because I thought he was a gentleman…I thought he knew something about breeding but he wasn’t fit to lick my shoe” (39). Her sense of superiority, however, is based on the artificial plan for her and Tom to get married.

That Tom’s idea of who belongs to his own superior community is determined more by “breeding” than by economic success—i.e. by birth and not accomplishment—is evidenced by his attitude toward Gatsby. In a scene that has Tom stopping with two friends, a husband and wife, at Gatsby’s mansion while riding horses, he is shocked when Gatsby shows an inclination to accept an invitation to supper extended by the woman, who is quite drunk. Both the husband and Tom show their disapproval. “My God,” Tom says to Nick, “I believe the man’s coming…Doesn’t he know she doesn’t want him?” (109). When Nick points out that the woman just said she did want him, Tom answers, “he won’t know a soul there.” Gatsby’s statement in the same scene that he knows Tom’s wife provokes him, as soon as Gatsby has left the room, to say, “By God, I may be old-fashioned in my ideas but women run around too much these days to suit me. They meet all kinds of crazy fish” (110). In a later scene that has Tom accompanying Daisy, with Nick in tow, to one of Gatsby’s parties, he asks, “Who is this Gatsby anyhow?... Some big bootlegger?” When Nick says he’s not, Tom says, “Well, he certainly must have strained himself to get this menagerie together” (114). Even when Tom discovers that Gatsby and Daisy are having an affair, he still doesn’t take Gatsby seriously. He calls Gatsby “Mr. Nobody from Nowhere” (137), and says, “I’ll be damned if I see how you got within a mile of her unless you brought the groceries to the back door” (138). Once he’s succeeded in scaring Daisy with suggestions of Gatsby’s criminal endeavors, Tom insists the two drive home together, saying, “I think he realizes that his presumptuous little flirtation is over” (142).

When George Wilson looks to the eyes of Dr. Eckleburg in supplication after that very car ride leads to Myrtle’s death, the fact that this “God” is an advertisement, a supplication in its own right to viewers on behalf of the optometrist to boost his business, symbolically implicates the substitution of markets for religion—or a sense of common interest—as the main factor behind Tom’s superciliously careless sense of privilege. The eyes seem such a natural stand-in for an absent God that it’s easy to take the symbolic logic for granted without wondering why George might mistake them as belonging to some sentient agent. Evolutionary psychologist Jesse Bering takes on that very question in The God Instinct: The Psychology of Souls, Destiny, and the Meaning of Life, where he cites research suggesting that “attributing moral responsibility to God is a sort of residual spillover from our everyday social psychology dealing with other people” (138). Bering theorizes that humans’ tendency to assume agency behind even random physical events evolved as a by-product of our profound need to understand the motives and intentions of our fellow humans: “When the emotional climate is just right, there’s hardly a shape or form that ‘evidence’ cannot assume. Our minds make meaning by disambiguating the meaningless” (99). In place of meaningless events, humans see intentional signs.

According to Bering’s theory, George Wilson’s intense suffering would have made him desperate for some type of answer to the question of why such tragedy has befallen him. After discussing research showing that suffering, as defined by societal ills like infant mortality and violent crime, and “belief in God were highly correlated,” Bering suggests that thinking of hardship as purposeful, rather than random, helps people cope because it allows them to place what they’re going through in the context of some larger design (139). What he calls “the universal common denominator” to all the permutations of religious signs, omens, and symbols, is the same cognitive mechanism, “theory of mind,” that allows humans to understand each other and communicate so effectively as groups. “In analyzing things this way,” Bering writes,

we’re trying to get into God’s head—or the head of whichever culturally constructed supernatural agent we have on offer… This is to say, just like other people’s surface behaviors, natural events can be perceived by us human beings as being about something other than their surface characteristics only because our brains are equipped with the specialized cognitive software, theory of mind, that enables us to think about underlying psychological causes. (79)

So George, in his bereaved and enraged state, looks at a billboard of a pair of eyes and can’t help imagining a mind operating behind them, one whose identity he’s learned to associate with a figure whose main preoccupation is the judgment of individual humans’ moral standings. According to both David Sloan Wilson and Jesse Bering, though, the deity’s obsession with moral behavior is no coincidence.

Covering some of the same game theory territory as Flesch, Bering points out that the most immediate purpose to which we put our theory of mind capabilities is to figure out how altruistic or selfish the people around us are. He explains that

in general, morality is a matter of putting the group’s needs ahead of one’s own selfish interests. So when we hear about someone who has done the opposite, especially when it comes at another person’s obvious expense, this individual becomes marred by our social judgment and grist for the gossip mills. (183)

Having arisen as a by-product of our need to monitor and understand the motives of other humans, religion would have been quickly co-opted in the service of solving the same free-rider problem Flesch finds at the heart of narratives. Alongside our concern for the reputations of others is a close guarding of our own reputations. Since humans are given to assuming agency is involved even in random events like shifts in weather, group cohesion could easily have been optimized with the subtlest suggestion that hidden agents engage in the same type of monitoring as other, fully human members of the group. Bering writes:

For many, God represents that ineradicable sense of being watched that so often flares up in moments of temptation—He who knows what’s in our hearts, that private audience that wants us to act in certain ways at critical decision-making points and that will be disappointed in us otherwise. (191)

Bering describes some of his own research that demonstrates this point. Right around the age at which children develop a theory of mind (about four), they begin responding to suggestions that they’re being watched by an invisible agent—named Princess Alice in honor of Bering’s mother—by more frequently resisting the temptation to cheat at a game whose design gives them ample opportunity to do so (Piazza et al. 311-20). An experiment with adult participants, this time told that the ghost of a dead graduate student had been seen in the lab, showed the same results: when competing in a game for fifty dollars, they were much less likely to cheat than others who weren’t told the ghost story (Bering 193).

Bering also cites a study that has even more immediate relevance to George Wilson’s odd behavior vis-à-vis Dr. Eckleburg’s eyes. In “Cues of Being Watched Enhance Cooperation in a Real-World Setting,” the authors describe an experiment in which they tested the effects of various pictures placed near an “honesty box,” where people were supposed to contribute money in exchange for milk and tea. What they found is that when the pictures featured human eyes, people contributed considerably more money than when the pictures featured flowers. They theorize that

images of eyes motivate cooperative behavior because they induce a perception in participants of being watched. Although participants were not actually observed in either of our experimental conditions, the human perceptual system contains neurons that respond selectively to stimuli involving faces and eyes…, and it is therefore possible that the images exerted an automatic and unconscious effect on the participants’ perception that they were being watched. Our results therefore support the hypothesis that reputational concerns may be extremely powerful in motivating cooperative behavior. (2) (But also see Sparks et al. for failed replications)

This study also suggests that, while Fitzgerald may have meant the Dr. Eckleburg sign as a nod toward religion being supplanted by commerce, there is an alternate reading of the scene that focuses on the sign’s more direct impact on George Wilson. In several scenes throughout the novel, Wilson shows his willingness to acquiesce in the face of Tom’s bullying. Nick describes him as “spiritless” and “anemic” (29). It could be that when he says “God sees everything” he’s in fact addressing himself because he is tempted not to pursue justice—to let the crime go unpunished and thus be guilty himself of being a second-order free-rider. He doesn’t, after all, exert any great effort to find and kill Gatsby, and he kills himself immediately thereafter anyway.

Religion in Gatsby does, of course, go beyond some suggestive references to an empty placeholder. Nick ends the story with a reflection on how “Gatsby believed in the green light,” the light across the bay which he knew signaled Daisy’s presence in the mansion she lived in there. But for Gatsby it was also “the orgastic future that year by year recedes before us. It eluded us then, but that’s no matter—tomorrow we will run faster, stretch out our arms farther… And one fine morning—” (189). Earlier Nick had explained how Gatsby “talked a lot about the past and I gathered that he wanted to recover something, some idea of himself perhaps, that had gone into loving Daisy.” What that idea was becomes apparent in the scene describing Gatsby and Daisy’s first kiss, which occurred years prior to the events of the plot. “He knew that when he kissed this girl, and forever wed his unutterable visions to her perishable breath, his mind would never romp again like the mind of God… At his lips’ touch she blossomed for him like a flower and the incarnation was complete” (117). In place of some mind in the sky, the design Americans are encouraged to live by is one they have created for themselves. Unfortunately, just as there is no mind behind the eyes of Doctor T.J. Eckleburg, the designs many people come up with for themselves are based on tragically faulty premises.

The replacement of religiously inspired moral principles with selfish economic and hierarchical calculations, which Dr. Eckleburg so perfectly represents, is what ultimately leads to all the disgraceful behavior Nick describes. He writes, “They were careless people, Tom and Daisy—they smashed up things and people and creatures and then retreated back into their money or their vast carelessness or whatever it was that kept them together, and let other people clean up the mess” (188). Game theorist and behavioral economist Robert Frank, whose earlier work greatly influenced William Flesch’s theories of narrative, has recently written about how the same social dynamics Fitzgerald lamented are in place again today. In The Darwin Economy, he describes what he calls an “expenditure cascade”:

The explosive growth of CEO pay in recent decades, for example, has led many executives to build larger and larger mansions. But those mansions have long since passed the point at which greater absolute size yields additional utility… Top earners build bigger mansions simply because they have more money. The middle class shows little evidence of being offended by that. On the contrary, many seem drawn to photo essays and TV programs about the lifestyles of the rich and famous. But the larger mansions of the rich shift the frame of reference that defines acceptable housing for the near-rich, who travel in many of the same social circles… So the near-rich build bigger, too, and that shifts the relevant framework for others just below them, and so on, all the way down the income scale. By 2007, the median new single-family house built in the United States had an area of more than 2,300 square feet, some 50 percent more than its counterpart from 1970. (61-2)

How exactly people are straining themselves to afford these houses would be a fascinating topic for Fitzgerald’s successors. But one thing is already abundantly clear: it’s not the CEOs who are cleaning up the mess.

Also read:
HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

WHY THE CRITICS ARE GETTING LUHRMANN'S GREAT GATSBY SO WRONG

WHY SHAKESPEARE NAUSEATED DARWIN: A REVIEW OF KEITH OATLEY'S "SUCH STUFF AS DREAMS"

Read More
Dennis Junk

T.J. Eckleburg Sees Everything: The Great God-Gap in Gatsby part 1 of 2

So profound is humans’ concern for their reputations that they can even be nudged toward altruistic behaviors by the mere suggestion of invisible witnesses or the simplest representation of watching eyes. The billboard featuring Dr. Eckleburg’s eyes, however, holds no sway over George’s wife Myrtle, or the man she has an affair with. That this man, Tom Buchanan, has such little concern for his reputation—or that he simply feels entitled to exploit Myrtle—serves as an indictment of the social and economic inequality in the America of Fitzgerald’s day.

            When George Wilson, in one of the most disturbing scenes in F. Scott Fitzgerald’s classic The Great Gatsby, tells his neighbor that “God sees everything” while staring disconsolately at the weathered advertisement of some long-ago optometrist named T.J. Eckleburg, his longing for a transcendent authority who will mete out justice on his behalf pulls at the hearts of readers who realize his plea will go unheard. Anthropologists and psychologists studying the human capacity for cooperation and altruism are coming to view religion as an important factor in our evolution. Since the cooperative are always at risk of being exploited by the selfish, mechanisms to enforce altruism had to be in place for any tendency to behave for the benefit of others to evolve. The most basic of these mechanisms is a constant awareness of our own and our neighbors’ reputations. Humans, research has shown, are far more tempted to behave selfishly when they believe it won’t harm their reputations—i.e. when they believe no witnesses are present.

So profound is humans’ concern for their reputations that they can even be nudged toward altruistic behaviors by the mere suggestion of invisible witnesses or the simplest representation of watching eyes. The billboard featuring Dr. Eckleburg’s eyes, however, holds no sway over George’s wife Myrtle, or the man she has an affair with. That this man, Tom Buchanan, has such little concern for his reputation—or that he simply feels entitled to exploit Myrtle—serves as an indictment of the social and economic inequality in the America of Fitzgerald’s day, which carved society into hierarchically arranged echelons and exposed the have-nots to the careless depredations of the haves.

Nick Carraway, the narrator, begins the story by recounting a lesson he learned from his father as part of his Midwestern upbringing. “Whenever you feel like criticizing anyone,” Nick’s father had told him, “just remember that all the people in this world haven’t had the advantages that you’ve had” (5). This piece of wisdom serves at least two purposes: it explains Nick’s self-proclaimed inclination to “reserve all judgments,” highlighting the severity of the wrongdoings which have prompted him to write the story; and it provides an ironic moral lens through which readers view the events of the plot. What is to be made, in light of Nick’s father’s reminder about unevenly parceled out advantages, of the crimes committed by wealthy characters like Tom and Daisy Buchanan?

The focus on morality notwithstanding, religion plays a scant, but surprising, role in The Great Gatsby. It first appears in a conversation between Nick and Catherine, the sister of Myrtle Wilson. Catherine explains to Nick that neither Tom nor Myrtle “can stand the person they’re married to” (37). To the obvious question of why they don’t simply leave their spouses, Catherine responds that it’s Daisy, Tom’s wife, who represents the sole obstacle to the lovers’ happiness. “She’s a Catholic,” Catherine says, “and they don’t believe in divorce” (38). However, Nick explains that “Daisy was not a Catholic,” and he goes on to admit, “I was a little shocked by the elaborateness of the lie.” The conversation takes place at a small gathering hosted by Tom and Myrtle in an apartment rented, it seems, for the sole purpose of giving the two a place to meet. Before Nick leaves the party, he witnesses an argument between the hosts over whether Myrtle has any right to utter Daisy’s name which culminates in Tom striking her and breaking her nose. Obviously, Tom doesn’t despise his wife as much as Myrtle does her husband. And the lie about Daisy’s religious compunctions serves simply to justify Tom’s refusal to leave her and facilitate his continued exploitation of Myrtle.

The only other scene in which a religious belief is asserted explicitly is the one featuring the conversation between George and his neighbor. It comes after Myrtle, whose dalliance had finally aroused her husband’s suspicion, has been struck by a car and killed. George, upon discovering that something had been going on behind his back, locked Myrtle in his garage, and it was when she escaped and ran out into the road to stop the car she thought Tom was driving that she got hit. As the dust from the accident settles—literally, since the garage and the stretch of road are situated in a “valley of ashes” created by the remnants of the coal powering the nearby city being dumped alongside the adjacent rail tracks—George is left alone with a fellow inhabitant of the valley, a man named Michaelis, who asks if he belongs to a church where there might be a priest he can call to come comfort him. “Don’t belong to one,” George answers (165). He does, however, describe a religious belief of sorts to Michaelis. Having explained why he’d begun to suspect Myrtle was having an affair, George goes on to say, “I told her she might fool me but she couldn’t fool God. I took her to the window.” He walks to the window again as he’s telling the story to his neighbor. “I said, ‘God knows what you’ve been doing, everything you’ve been doing. You may fool me but you can’t fool God!’” (167). Michaelis, who is by now fearing for George’s sanity, notices something disturbing as he stands listening to this rant: “Standing behind him Michaelis saw with a shock that he was looking at the eyes of Doctor T.J. Eckleburg which had just emerged pale and enormous from the dissolving night” (167). When George speaks again, repeating, “God sees everything,” Michaelis feels compelled to assure him, “That’s an advertisement” (167). Though when George first expresses the sentiment, part declaration, part plea, he was clearly thinking of Myrtle’s crime against him, when he repeats it he seems to be thinking of the driver’s crime against Myrtle. God may have seen it, but George takes it upon himself to deliver the punishment.

George Wilson’s turning to God for some moral accounting, despite his general lack of religious devotion, mirrors Nick Carraway’s efforts to settle the question of culpability, despite his own professed reluctance to judge, through the telling of this tragic story. Nick learns from Gatsby that it was in fact Daisy, with whom Gatsby has been carrying on an affair, who was behind the wheel of the car that killed Myrtle. But Gatsby, who was in the passenger seat, assures him it was an accident, not revenge for the affair Myrtle was carrying on with Daisy’s husband. Yet when George finally leaves his garage and turns to Tom to find out who owns the car that killed his wife, assuming it is the same man his wife was cheating on him with, Tom informs him the car belongs to Gatsby, leaving out the crucial fact that Gatsby never met Myrtle. George goes to Gatsby’s mansion, finds him in his pool, shoots and kills him, and then turns the gun on himself. Three people end up dead, Myrtle, George, and Gatsby. Despite their clear complicity, though, Tom and Daisy experience nary a repercussion beyond the natural grief of losing their lovers. Insofar as Nick believes the Buchanans’ perfect getaway is an intolerable injustice, he must realize he holds the power to implicate them, to damage their reputations, by writing and publishing his account of the incidents leading up to the deaths.

Evolutionary critic William Flesch sees our human passion for narrative as a manifestation of our obsession with our own and our fellow humans’ reputations, which evolved at least in part to keep track of each other’s propensities for moral behavior. In Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction, Flesch lays out his attempt at solving what he calls “the puzzle of narrative interest,” by which he means the question of why people feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). He finds the key to solving this puzzle in a concept called “strong reciprocity,” whereby “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their individual interactions with the reciprocator” (22). An example of this phenomenon takes place in the novel when the guests at Gatsby’s parties gossip and ardently debate about which of the rumors circling their host are true—particularly of interest is the one saying that “he killed a man” (48). Flesch cites reports from experiments demonstrating that in uneven exchanges, participants with no stake in the outcome are actually willing to incur some cost to themselves in an effort to enforce fairness (31-5). He then goes on to give a compelling account of how this tendency goes a long way toward an explanation of our human fascination with storytelling.

Flesch’s theory of narrative interest begins with models of the evolution of cooperation. For the first groups of human ancestors to evolve cooperative or altruistic traits, they would have had to solve what biologists and game theorists call “the free-rider problem.” Flesch explains:

Darwin himself had proposed a way for altruism to evolve through a mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries were too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

Strong, or indirect reciprocity, coupled with a selfish concern for one’s own reputation, may have evolved as mechanisms to address this threat of exploitative non-cooperators. For instance, in order for Tom Buchanan to behave selfishly by sleeping with George Wilson’s wife, he had to calculate his chances of being discovered in the act and punished. Interestingly, after “exchanging a frown with Doctor Eckleburg” while speaking to Nick in an early scene in Wilson’s garage, Tom suggests his motives for stealing away with Myrtle are at least somewhat noble. “Terrible place,” he says of the garage and the valley of ashes. “It does her good to get away” (30). Nick, clearly uncomfortable with the position Tom has put him in, where he has to choose whether to object to Tom’s behavior or play the role of second-order free-rider himself, poses the obvious question: “Doesn’t her husband object?” To which Tom replies, “He’s so dumb he doesn’t know he’s alive” (30). Nick, inclined to reserve judgment, keeps Tom and Myrtle’s secret. Later in the novel, though, he keeps the same secret for Daisy and Gatsby.

What makes Flesch’s theory so compelling is that it sheds light on the roles played by everyone from the author, in this case Fitzgerald, to the readers, to the characters, whose nonexistence beyond the pages of the novel is little obstacle to their ability to arouse sympathy or ire. Just as humans are keen to ascertain the relative altruism of their neighbors, so too are they given to broadcasting signals of their own altruism. Flesch explains, “we track not only the original actor whose actions we wish to see reciprocated, whether through reward or more likely punishment; we track as well those who are in a position to track that actor, and we track as well those in a position to track those tracking the actor” (50). What this means is that even if the original “actor” is fictional, readers can signal their own altruism by becoming emotionally engaged in the outcome of the story, specifically by wanting to see altruistic characters rewarded and selfish characters punished.

Nick Carraway is tracking Tom Buchanan’s actions, for instance. Reading the novel, we have little doubt what Nick’s attitude toward Tom is, especially as the story progresses. Though we may favor Nick over Tom, Nick’s failure to sufficiently punish Tom when the degree of his selfishness first becomes apparent tempers any positive feelings we may have toward him. As Flesch points out, “altruism could not sustain an evolutionarily stable system without the contribution of altruistic punishers to punish the free-riders who would flourish in a population of purely benevolent altruists” (66).

On the other hand, through the very act of telling the story, the narrator may be attempting to rectify his earlier moral complacence. According to Flesch’s model of the dynamics of fiction, “The story tells a story of punishment; the story punishes as story; the storyteller represents him- or herself as an altruistic punisher by telling it” (83). However, many readers of Gatsby probably find Nick’s belated punishment insufficient, and if they fail to see the novel as a comment on the real injustice Fitzgerald saw going on around him they would be both confused and disappointed by the way the story ends.

Read part 2

Read More
Dennis Junk

I am Jack’s Raging Insomnia: The Tragically Overlooked Moral Dilemma at the Heart of Fight Club

There’s a lot of weird theorizing about what the movie Fight Club is really about and why so many men find it appealing. The answer is actually pretty simple: the narrator can’t sleep because his job has him doing something he knows is wrong, but he’s so emasculated by his consumerist obsessions that he won’t risk confronting his boss and losing his job. He needs someone to teach him to man up, so he creates Tyler Durden. Then Tyler gets out of control.

Image by Canva’s Magic Media

[This essay is a brief distillation of ideas explored in much greater depth in Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis]

If you were to ask one of the millions of guys who love the movie Fight Club what the story is about, his answer would most likely emphasize the violence. He might say something like, “It’s about men returning to their primal nature and getting carried away when they find out how good it feels.” Actually, this is an answer I would expect from a guy with exceptional insight. A majority would probably just say it’s about a bunch of guys who get together to beat the crap out of each other and pull a bunch of pranks. Some might remember all the talk about IKEA and other consumerist products. Our insightful guy may even connect the dots and explain that consumerism somehow made the characters in the movie feel emasculated, and so they had to resort to fighting and vandalism to reassert their manhood. But, aside from ensuring they would know what a duvet is—“It’s a fucking blanket”—what is it exactly about shopping for household décor and modern conveniences that makes men less manly?

Maybe Fight Club is just supposed to be fun, with all the violence, and the weird sex scene with Marla, and all the crazy mischief the guys get into, but also with a few interesting monologues and voiceovers to hint at deeper meanings. And of course there’s Tyler Durden—fearless, clever, charismatic, and did you see those shredded abs? Not only does he not take shit from anyone, he gets a whole army to follow his lead, loyal to the death. On the other hand, there’s no shortage of characters like this in movies, and if that’s all men liked about Fight Club they wouldn’t sit through all the plane flights, support groups, and soap-making. It just may be that, despite the rarity of fans who can articulate what they are, the movie actually does have profound and important resonances.

If you recall, the Edward Norton character, whom I’ll call Jack (following the convention of the script), decides that his story should begin with the advent of his insomnia. He goes to the doctor but is told nothing is wrong with him. His first night’s sleep comes only after he goes to a support group and meets Bob, he of the “bitch tits,” and cries a smiley face onto his t-shirt. But along comes Marla, who, like Jack, is visiting support groups but is not in fact recovering, sick, or dying. She is another tourist. As long as she’s around, he can’t cry, and so he can’t sleep. Soon after Jack and Marla make a deal to divide the group meetings and avoid each other, Tyler Durden shows up and we’re on our way to Fight Clubs and Project Mayhem. Now, why the hell would we accept these bizarre premises and continue watching the movie unless at some level Jack’s difficulties, as well as their solutions, make sense to us?

So why exactly was it that Jack couldn’t sleep at night? The simple answer, the one that Tyler gives later in the movie, is that he’s unhappy with his life. He hates his job. Something about his “filing cabinet” apartment rankles him. And he’s alone. Jack’s job is to fly all over the country to investigate accidents involving his company’s vehicles and to apply “the formula.” I’m going to quote from Chuck Palahniuk’s book:

You take the population of vehicles in the field (A) and multiply it by the probable rate of failure (B), then multiply the result by the average cost of an out-of-court settlement (C).

A times B times C equals X. This is what it will cost if we don’t initiate a recall.

If X is greater than the cost of a recall, we recall the cars and no one gets hurt.

If X is less than the cost of a recall, then we don’t recall (30).

Palahniuk's inspiration for Jack's job was an actual case involving the Ford Pinto. What this means is that Jack goes around trying to protect his company's bottom line to the detriment of people who drive his company's cars. You can imagine the husband or wife or child or parent of one of these accident victims hearing about this job and asking Jack, "How do you sleep at night?"
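
Spelled out as code, the calculation is trivial, which is part of what makes it so chilling. Here is a minimal sketch in Python using the variables from the passage above; the figures are invented for illustration and come from neither the novel nor the actual Pinto case.

```python
# A minimal sketch of "the formula" Jack applies. A, B, and C follow the
# passage above; the figures below are invented, not from the novel or the
# real Ford Pinto memo.

def initiate_recall(vehicles_in_field, failure_rate, avg_settlement, recall_cost):
    """Return True if settling out of court would cost more than a recall."""
    x = vehicles_in_field * failure_rate * avg_settlement  # A * B * C = X
    return x > recall_cost

A = 500_000        # vehicles in the field
B = 0.0001         # probable rate of failure
C = 1_500_000      # average out-of-court settlement, in dollars
recall_cost = 120_000_000

# X = 75,000,000, which is less than the recall cost, so the company settles
# with the victims rather than recalling the cars.
print(initiate_recall(A, B, C, recall_cost))  # False
```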

Going to support groups makes life seem pointless, short, and horrible. Ultimately, we all have little control over our fates, so there's no good reason to take responsibility for anything. When Jack bursts into tears as Bob pulls his face into his enlarged breasts, he's relinquishing all accountability; he's, in a sense, becoming a child again. Accordingly, he's able to sleep like a baby. When Marla shows up, not only is he forced to confront the fact that he's healthy and perfectly able to behave responsibly, but he is also provided with an incentive to grow up because, as his fatuous grin informs us, he likes her. And, even though the support groups eventually fail to assuage his guilt, they do inspire him with the idea of hitting bottom, losing all control, losing all hope.

Here’s the crucial point: If Jack didn't have to worry about losing his apartment, or losing all his IKEA products, or losing his job, or falling out of favor with his boss, well, then he would be free to confront that same boss and tell him what he really thinks of the operation that has supported and enriched them both. Enter Tyler Durden, who systematically turns all these conditionals into realities. In game theory terms, Jack is both a 1st order and a 2nd order free rider because he both gains at the expense of others and knowingly allows others to gain in the same way. He carries on like this because he's more motivated by comfort and safety than he is by any assurance that he's doing right by other people.

This is where Jack being of "a generation of men raised by women" becomes important (50). Fathers and mothers tend to treat children differently. A study that functions well symbolically in this context examined the ways moms and dads tend to hold their babies in pools. Moms hold them facing themselves. Dads hold them facing away. Think of the way Bob's embrace of Jack changes between the support group and the fight club. When picked up by moms, babies' breathing and heart rates slow. Just the opposite happens when dads pick them up--they get excited. And if you inventory the types of interactions that go on between the two parents it's easy to see why.

Not only do dads engage children in more rough-and-tumble play; they are also far more likely to encourage children to take risks. In one study, fathers who were told they'd have to observe their child climbing a slope from a distance, making any kind of rescue impossible in the event of a fall, set the slopes at a much steeper angle than mothers in the same setup.

Fight Club isn't about dominance or triumphalism or white males' reaction to losing control; it's about men learning that they can't really live if they're always playing it safe. Jack actually says at one point that winning or losing doesn't much matter. Indeed, one of the homework assignments Tyler gives everyone is to start a fight and lose. The point is to be willing to risk a fight when it's necessary--i.e. when someone attempts to exploit or seduce you based on the assumption that you'll always act according to your rational self-interest.

And the disturbing truth is that we are all lulled into hypocrisy and moral complacency by the allures of consumerism. We may not be "recall campaign coordinators" like Jack. But do we know or care where our food comes from? Do we know or care how our soap is made? Do we bother to ask why Disney movies are so devoid of the gross mechanics of life? We would do just about anything for comfort and safety. And that is precisely how material goods and material security have emasculated us. It's easy to imagine Jack's mother soothing him to sleep some night, saying, "Now, the best thing to do, dear, is to sit down and talk this out with your boss."

There are two scenes in Fight Club that I can't think of any other word to describe but sublime. The first is when Jack finally confronts his boss, threatening to expose the company's practices if he is not allowed to leave with full salary. At first, his boss reasons that Jack's threat is not credible, because bringing his crimes to light would hurt Jack just as much. But the key element to what game theorists call altruistic punishment is that the punisher is willing to incur risks or costs to mete out justice. Jack, having been well-fathered, as it were, by Tyler, proceeds to engage in costly signaling of his willingness to harm himself by beating himself up, literally. In game theory terms, he's being rationally irrational, making his threat credible by demonstrating he can't be counted on to pursue his own rational self-interest. The money he gets through this maneuver goes, of course, not into anything for Jack, but into Fight Club and Project Mayhem.

The second sublime scene, and for me the best in the movie, is the one in which Jack is himself punished for his complicity in the crimes of his company. How can a guy with stitches in his face and broken teeth, a guy with a chemical burn on his hand, be punished? Fittingly, he lets Tyler get them both in a car accident. At this point, Jack is in control of his life, he's no longer emasculated. And Tyler flees.

One of the confusing things about the movie is that it has two overlapping plots. The first, which I've been exploring up to this point, centers on Jack's struggle to man up and become an altruistic punisher. The second is about the danger of violent reactions to the murder machine of consumerism. The male ethic of justice through violence can all too easily morph into fascism. And so, once Jack has created this father figure and been initiated into manhood by him, he then has to rein him in--specifically, he has to keep him from killing Marla. This second plot entails what anthropologist Christopher Boehm calls a "domination episode," in which an otherwise egalitarian group gets taken over by a despot who must then be defeated. Interestingly, only Jack knows for sure how much authority Tyler has, because Tyler seemingly undermines that authority by giving contradictory orders. But by now Jack is well schooled on how to beat Tyler--pretty much the same way he beat his boss.

It's interesting to think about possible parallels between the way Fight Club ends and what happened a couple years later on 9/11. The violent reaction to the criminal excesses of consumerism and capitalism wasn't, as it actually occurred, homegrown. And it wasn't inspired by any primal notion of manhood but by religious fanaticism. Still, in the minds of the terrorists, the attacks were certainly a punishment, and there's no denying the cost to the punishers.

Also read:
WHAT MAKES "WOLF HALL" SO GREAT?

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

THE ADAPTIVE APPEAL OF BAD BOYS

Read More