READING SUBTLY
This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you're looking for.
Stories, Social Proof, & Our Two Selves
Robert Cialdini describes the phenomenon of social proof, whereby we look to how others are responding to something before we form an opinion ourselves. What are the implications of social proof for our assessments of literature? Daniel Kahneman describes two competing “selves,” the experiencing self and the remembering self. Which one should we trust to let us know how we truly feel about a story?
You’ll quickly come up with a justification for denying it, but your response to a story is influenced far more by other people’s responses to it than by your moment-to-moment experience of reading or watching it. The impression that we either enjoy an experience or we don’t, that our enjoyment or disappointment emerges directly from the scenes, sensations, and emotions of the production itself, results from our cognitive blindness to several simultaneously active processes that go into our final verdict. We’re only ever aware of the output of the various algorithms, never the individual functions.
None of us, for instance, directly experiences the operation of what psychologist and marketing expert Robert Cialdini calls social proof, but its effects on us are embarrassingly easy to measure. Even the way we experience pain depends largely on how we perceive others to be experiencing it. Subjects receiving mild shocks not only report them to be more painful when they witness others responding to them more dramatically, but they also show physiological signs of being in greater distress.
Cialdini opens the chapter on social proof in his classic book Influence: Science and Practice by pointing to the bizarre practice of setting television comedies to laugh tracks. Most people you talk to will say canned laughter is annoying—and they’ll emphatically deny the mechanically fake chuckles and guffaws have any impact on how funny the jokes seem to them. The writers behind those jokes, for their part, probably aren’t happy about the implicit suggestion that their audiences need to be prompted to laugh at the proper times. So why do laugh tracks accompany so many shows? “What can it be about canned laughter that is so attractive to television executives?” Cialdini asks.
Why are these shrewd and tested people championing a practice that their potential watchers find disagreeable and their most creative talents find personally insulting? The answer is both simple and intriguing: They know what the research says. (98)
As with all the other “weapons of influence” Cialdini writes about in the book, social proof seems as obvious to people as it is dismissible. “I understand how it’s supposed to work,” we all proclaim, “but you’d have to be pretty stupid to fall for it.” And yet it still works—and it works on pretty much every last one of us. Cialdini goes on to discuss the finding that even suicide rates increase after a highly publicized story of someone killing themselves. The simple, inescapable reality is that when we see someone else doing something, we become much more likely to do it ourselves, whether it be writhing in genuine pain, laughing in genuine hilarity, or finding life genuinely intolerable.
Another factor that complicates our responses to stories is that, unlike momentary shocks or the telling of jokes, they usually last long enough to place substantial demands on working memory. Movies last a couple hours. Novels can take weeks. What this means is that when we try to relate to someone else what we thought of a movie or a book, we're relying on a remembered abstraction as opposed to a real-time recording of how much we enjoyed the experience. In his book Thinking, Fast and Slow, Daniel Kahneman suggests that our memories of experiences can diverge so much from our feelings at any given instant while actually having those experiences that we effectively have two selves: the experiencing self and the remembering self. To illustrate, he offers the example of a man who complains that a scratch at the end of a disc of his favorite symphony ruined the listening experience for him. “But the experience was not actually ruined, only the memory of it,” Kahneman points out. “The experiencing self had had an experience that was almost entirely good, and the bad end could not undo it, because it had already happened” (381). But the distinction usually only becomes apparent when the two selves disagree—and such disagreements usually require some type of objective recording to discover. Kahneman explains,
Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experiences. This is the tyranny of the remembering self. (381)
Kahneman suggests the priority we can’t help but give to the remembering self explains why tourists spend so much time taking pictures. The real objective of a vacation is not to have a pleasurable or fun experience; it’s to return home with good vacation stories.
Kahneman reports the results of a landmark study he designed with Don Redelmeier that compared moment-to-moment pain recordings of men undergoing colonoscopies to global pain assessments given by the patients after the procedure. The outcome demonstrated that the remembering self was remarkably unaffected by the duration of the procedure or the total sum of pain experienced, as gauged by adding up the scores given moment-to-moment during the procedure. Men who actually experienced more pain nevertheless rated the procedure as less painful when the discomfort tapered off gradually as opposed to dropping off precipitously after reaching a peak. The remembering self is reliably guilty of what Kahneman calls “duration neglect,” and it assesses experiences based on a “peak-end rule,” whereby the “global retrospective rating” will be “well predicted by the average level of pain reported at the worst moment of the experience and at its end” (380). Duration neglect and the peak-end rule probably account for the greater risk of addiction for users of certain drugs like heroin or crystal meth, which result in rapid, intense highs and precipitous drop-offs, as opposed to drugs like marijuana whose effects are longer-lasting but less intense.
We’ve already seen that pain in real time can be influenced by how other people are responding to it, and we can probably extrapolate and assume that the principle applies to pleasurable experiences as well. How does the divergence between experience and memory factor into our response to stories as expressed by our decisions about further reading or viewing, or in things like reviews or personal recommendations? For one thing, we can see that most good stories are structured in a way that serves not so much as a Jamesian “direct impression of life,” i.e., as reports from the experiencing self, but much more like the tamed abstractions Stevenson described in his “Humble Remonstrance” to James. As Kahneman explains,
A story is about significant events and memorable moments, not about time passing. Duration neglect is normal in a story, and the ending often defines its character. The same core features appear in the rules of narratives and in the memories of colonoscopies, vacations, and films. This is how the remembering self works: it composes stories and keeps them for future reference. (387)
Now imagine that you’re watching a movie in a crowded theater. Are you influenced by the responses of your fellow audience members? Are you more likely to laugh if everyone else is laughing, wince if everyone else is wincing, cheer if everyone else is cheering? These are the effects on your experiencing self. What happens, though, in the hours and days and weeks after the movie is over—or after you’re done reading the book? Does your response to the story start to become intertwined with and indistinguishable from the cognitive schema you had in place before ever watching or reading it? Are your impressions influenced by the opinions of critics or friends whose opinions you respect? Do you give a celebrated classic the benefit of the doubt, assuming it has some merit even if you enjoyed it much less than some less celebrated work? Do you read into it virtues whose real source may be external to the story itself? Do you miss virtues that actually are present in less celebrated stories?
Taken to its extreme, this focus on social proof leads to what’s known as social constructivism. In the realm of stories, this would be the idea that there are no objective criteria at all with which to assess merit; it’s all based on popular opinion or the dictates of authorities. Much of the dissatisfaction with the so-called canon is based on this type of thinking. If we collectively decide some work of literature is really good and worth celebrating, the reasoning goes, then it magically becomes really good and worth celebrating. There’s an undeniable kernel of truth to this—and there’s really no reason to object to the idea that one of the things that makes a work of art attention-worthy is that a lot of people are attending to it. Art serves a social function after all; part of the fun comes from sharing the experience and having conversations about it. But I personally can’t credit the absolutist version of social constructivism. I don’t think you’re anything but a literary tourist until you can make a convincing case for why a few classics don’t deserve the distinction—even though I acknowledge that any such case will probably be based largely on the ideas of other people.
The research on the experiencing versus the remembering self also suggests a couple of criteria we can apply to our assessments of stories so that they're more meaningful to people who haven't been initiated into the society and culture of highbrow literature. Too often, the classics are dismissed as works only English majors can appreciate. And too often, they're written in a way that justifies that dismissal. One criterion should be based on how well the book satisfies the experiencing self: I propose that a story should be considered good insofar as it induces a state of absorption. You forget yourself and become completely immersed in the plot. Mihaly Csikszentmihalyi calls this state flow, and has found that the more time people spend in it the happier and more fulfilled they tend to be. But the total time a reader or viewer spends in a state of flow will likely be neglected if the plot never reaches a peak of intensity, or if it ends on a note of tedium. So the second criterion should be how memorable the story is. Assessments based on either of these criteria are of course inevitably vulnerable to social proof and idiosyncratic factors of the individual audience member (whether I find Swann’s Way tedious or absorbing depends on how much sleep and caffeine I’ve had). And yet knowing which effects make for a good aesthetic experience, in real time and in our memories, can help us avoid the trap of merely academic considerations. And knowing that our opinions will always be somewhat contaminated by outside influences shouldn’t keep us from trying to be objective any more than knowing that surgical theaters can never be perfectly sanitized should keep doctors from insisting they be as well scrubbed and scoured as possible.
Also read:
LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME
The Mental Illness Zodiac: Why the DSM 5 Won't Be Anything But More Pseudoscience
That the diagnostic categories are necessarily ambiguous and can’t be tied to any objective criteria like biological markers has been much discussed, as have the corruptions of the mental health industry, including clinical researchers who make their livings treating the same disorders they lobby to have included in the list of official diagnoses. What’s not being discussed, however, is the propensity in humans to take on roles, to play parts, even tragic ones, even horrific ones, without being able to recognize they’re doing so.
Thinking you can diagnose psychiatric disorders using checklists of symptoms means taking for granted a naïve model of the human mind and human behavior. How discouraging to those in emotional distress, or to those doubting their own sanity, that the guides they turn to for help and put their faith in to know what’s best for them embrace this model. The DSM has taken it for granted since its inception, and the latest version, the DSM 5, due out next year, despite all the impediments to practical usage it does away with, despite all the streamlining, and despite all the efforts to adhere to common sense, only perpetuates the mistake. That the diagnostic categories are necessarily ambiguous and can’t be tied to any objective criteria like biological markers has been much discussed, as have the corruptions of the mental health industry, including pharmaceutical companies’ reluctance to publish failed trials for their blockbuster drugs, and clinical researchers who make their livings treating the same disorders they lobby to have included in the list of official diagnoses. Indeed, there’s good evidence that prognoses for mental disorders have actually gotten worse over the past century. What’s not being discussed, however, is the propensity in humans to take on roles, to play parts, even tragic ones, even horrific ones, without being able to recognize they’re doing so.
In his lighthearted, mildly satirical but severely important book on self-improvement, 59 Seconds: Change Your Life in Under a Minute, psychologist Richard Wiseman describes an experiment he conducted for the British TV show The People Watchers. A group of students spending an evening in a bar with their friends was given a series of tests, and then they were given access to an open bar. The tests included memorizing a list of numbers, walking along a line on the floor, and catching a ruler dropped by experimenters as quickly as possible. Memory, balance, and reaction time—all areas in which our performance predictably diminishes as we drink. The outcomes of the tests were well in keeping with expectations as they were repeated over the course of the evening. All the students did progressively worse the more they drank. And the effects of the alcohol were consistent throughout the entire group of students. It turns out, however, that only half of them were drinking alcohol.
At the start of the study, Wiseman had given half the participants a blue badge and the other half a red badge. The bartenders poured regular drinks for everyone with red badges, but for those with blue ones they made drinks which looked, smelled, and tasted like their alcoholic counterparts but were actually non-alcoholic. Now, were the students with the blue badges faking their drunkenness? They may have been hamming it up for the cameras, but that would be true of the ones who were actually drinking too. What they were doing instead was taking on the role—you might even say taking on the symptoms—of being drunk. As Wiseman explains,
Our participants believed that they were drunk, and so they thought and acted in a way that was consistent with their beliefs. Exactly the same type of effect has emerged in medical experiments when people exposed to fake poison ivy developed genuine rashes, those given caffeine-free coffee became more alert, and patients who underwent a fake knee operation reported reduced pain from their “healed” tendons. (204)
After being told they hadn’t actually consumed any alcohol, the students in the blue group “laughed, instantly sobered up, and left the bar in an orderly and amused fashion.” But not all the natural role-playing humans engage in is this innocuous and short-lived.
In placebo studies like the one Wiseman conducted, participants are deceived. You could argue that actually drinking a convincing replica of alcohol or taking a realistic-looking pill is the important factor behind the effects. People who seek treatment for psychiatric disorders aren’t tricked in this way, so what would cause them to take on the role associated with, say, depression, or bipolar? But plenty of research shows that pills or potions aren’t necessary. We take on different roles in different settings and circumstances all the time. We act much differently at football games and rock concerts than we do at work or school. These shifts are deliberate, though, and we’re aware of them, at least to some degree, when they occur. But many cues are more subtle. It turns out that just being made aware of the symptoms of a disease can make you suspect that you have it. What’s called Medical Student Syndrome afflicts those studying both medical and psychiatric diagnoses. For the most part, you either have a biological disease or you don’t, so the belief that you have one is contingent on the heightened awareness that comes from studying the symptoms. But is there a significant difference between believing you’re depressed and having depression? The answer, according to checklist diagnosis, is no.
In America, we all know the symptoms of depression because we’re bombarded with commercials, like the one that uses squiggly circle faces to explain that it’s caused by a deficit of the neurotransmitter serotonin—a theory that had already been ruled out by the time that commercial began to air. More insidious though are the portrayals of psychiatric disorders in movies, TV series, or talk shows—more insidious because they embed the role-playing instructions in compelling stories. These shows profess to be trying to raise awareness so more people will get help to end their suffering. They profess to be trying to remove the stigma so people can talk about their problems openly. They profess to be trying to help people cope. But, from a perspective of human behavior that acknowledges the centrality of role-playing to our nature, all these shows are actually doing is shilling for the mental health industry, and they are probably helping to cause much of the suffering they claim to be trying to assuage.
Multiple Personality Disorder, or Dissociative Identity Disorder as it’s now called, was an exceedingly rare diagnosis until the late 1970s and early 1980s when its incidence spiked drastically. Before the spike, there were only ever around a hundred cases. Between 1985 and 1995, there were around 40,000 new cases. What happened? There was a book and a miniseries called Sybil, starring Sally Field, that aired in 1977. Much of the real-life story on which Sybil was based has been cast into doubt through further investigation (or has been shown to be completely fabricated). But if you’re one to give credence to the validity of the DID diagnosis (and you shouldn’t), then we can look at another strange behavioral phenomenon whose incidence spiked after a certain movie hit the box office in the 1970s. Prior to the release of The Exorcist, the Catholic church had pretty much consigned the eponymous ritual to the dustbins of history. Lately, though, they’ve had to dust it off.
The Skeptic’s Dictionary says of a TV series devoted to the exorcism ritual, or the play rather, on the Sci-Fi channel,
The exorcists' only prop is a Bible, which is held in one hand while they talk down the devil in very dramatic episodes worthy of Jerry Springer or Jenny Jones. The “possessed” could have been mentally ill, actors, mentally ill actors, drug addicts, mentally ill drug addicts, or they may have been possessed, as the exorcists claimed. All the participants shown being exorcized seem to have seen the movie “The Exorcist” or one of the sequels. They all fell into the role of husky-voiced Satan speaking from the depths, who was featured in the film. The similarities in speech and behavior among the “possessed” has led some psychologists such as Nicholas Spanos to conclude that both “exorcist” and “possessed” are engaged in learned role-playing.
If people can somehow inadvertently fall into the role of having multiple personalities or being possessed by demons, it’s not hard to imagine them hearing about, say, bipolar, briefly worrying that they may have some of the symptoms, and then subsequently taking on the role, even the identity of someone battling bipolar disorder.
Psychologist Dan McAdams theorizes that everyone creates his or her own “personal myth,” which serves to give life meaning and trajectory. The character we play in our own myth is what we recognize as our identity, what we think of when we try to answer the question “Who am I?” in all its profundity. But, as McAdams explains in The Stories We Live By: Personal Myths and the Making of the Self,
Stories are less about facts and more about meanings. In the subjective and embellished telling of the past, the past is constructed—history is made. History is judged to be true or false not solely with respect to its adherence to empirical fact. Rather, it is judged with respect to such narrative criteria as “believability” and “coherence.” There is a narrative truth in life that seems quite removed from logic, science, and empirical demonstration. It is the truth of a “good story.” (28-9)
The problem when it comes to diagnosing psychiatric disorders is that the checklist approach tries to use objective, scientific criteria, when the only answers it will ever get will be in terms of narrative criteria. But why, if people are prone to taking on roles, wouldn’t they take on something pleasant, like kings or princesses?
Since our identities are made up of the stories we tell about ourselves—even to ourselves—it’s important that those stories be compelling. And if nothing ever goes wrong in the stories we tell, well, they’d be pretty boring. As Jonathan Gottschall writes in The Storytelling Animal: How Stories Make Us Human,
This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television. (171)
Listen to the ways talk show hosts like Oprah talk about mental disorders, and count how many times in an episode she congratulates the afflicted guests for their bravery in keeping up the struggle. Sometimes, the word hero is even bandied about. Troublingly, the people who cast themselves as heroes spreading awareness, countering stigmas, and helping people cope even like to do really counterproductive things like publishing lists of celebrities who supposedly suffer from the disorder in question. Think you might have bipolar? Kay Redfield Jamison thinks you’re in good company. In her book Touched with Fire, she suggests everyone from rocker Kurt Cobain to fascist Mel Gibson is in that same boatful of heroes.
The reason medical researchers insist a drug must not only be shown to make people feel better but must also be shown to work better than a placebo is that even a sham treatment will make people report feeling better between 60 and 90% of the time, depending on several well-documented factors. What psychiatrists fail to acknowledge is that the placebo dynamic can be turned on its head—you can give people illnesses, especially mental illnesses, merely by suggesting they have the symptoms—or even by increasing their awareness of and attention to those symptoms past a certain threshold. If you tell someone a fact about themselves, they’ll usually believe it, especially if you claim a test or an official diagnostic manual allowed you to determine it. This is how frauds convince people they’re psychics. An experiment you can do yourself involves giving horoscopes to a group of people and asking how true they ring. After most of them endorse their reading, reveal that you changed the labels and they all in fact read the wrong sign’s description.
Psychiatric diagnoses, to be considered at all valid, would need to be double-blind, just like drug trials: the patient shouldn’t know the diagnosis being considered; the rater shouldn’t know the diagnosis being considered; only a final scorer, who has no contact with the patient, should determine the diagnosis. The categories themselves are, however, equally problematic. In order to be properly established as valid, they need to have predictive power. Trials would have to be conducted in which subjects assigned to the prospective categories using double-blind protocols were monitored for long periods of time to see if their behavior adheres to what’s expected of the disorder. For instance, bipolar is supposedly marked by cyclical mood swings. Where are the mood diary studies? (The last time I looked for them was six months ago, so if you know of any, please send a link.) Smart phones offer all kinds of possibilities for monitoring and recording behaviors. Why aren’t they being used to do actual science on mental disorders?
To research the role-playing dimension of mental illness, one (completely unethical) approach would be to design from scratch a really bizarre disorder, publicize its symptoms, maybe make a movie starring Mel Gibson, and monitor incidence rates. Let’s call it Puppy Pregnancy Disorder. We all know dog saliva is chock-full of gametes, right? So, let’s say the disorder is caused when a canine, in a state of sexual arousal of course, bites the victim, thus impregnating her—or even him. Let’s say it affects men too. Wouldn’t that be funny? The symptoms would be abdominal pain, and something just totally out there, like, say, small pieces of puppy feces showing up in your urine. Now, this might be too outlandish, don’t you think? There’s no way we could get anyone to believe this. Unfortunately, I didn’t really make this up. And there are real people in India who believe they have Puppy Pregnancy Disorder.
Also read:
THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS
And:
THE SELF-TRANSCENDENCE PRICE TAG: A REVIEW OF ALEX STONE'S FOOLING HOUDINI
Why Shakespeare Nauseated Darwin: A Review of Keith Oatley's "Such Stuff as Dreams"
Does practicing science rob one of humanity? Why is it that, if reading fiction trains us to take the perspective of others, English departments are rife with pettiness and selfishness? Keith Oatley is trying to make the study of literature more scientific, and he provides hints to these riddles and many others in his book “Such Stuff as Dreams.”
Late in his life, Charles Darwin lost his taste for music and poetry. “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he laments in his autobiography, and for many of us the temptation to place all men and women of science into a category of individuals whose minds resemble machines more than living and emotionally attuned organs of feeling and perceiving is overwhelming. In the 21st century, we even have a convenient psychiatric diagnosis for people of this sort. Don’t we just assume Sheldon in The Big Bang Theory has autism, or at least the milder version of it known as Asperger’s? It’s probably even safe to assume the show’s writers had the diagnostic criteria for the disorder in mind when they first developed his character. Likewise, Dr. Watson in the BBC’s new and obscenely entertaining Sherlock series can’t resist a reference to the quintessential evidence-crunching genius’s own supposed Asperger’s.
In Darwin’s case, however, the move away from the arts couldn’t have been due to any congenital deficiency in his finer human sentiments because it occurred only in adulthood. He writes,
I have said that in one respect my mind has changed during the last twenty or thirty years. Up to the age of thirty, or beyond it, poetry of many kinds, such as the works of Milton, Gray, Byron, Wordsworth, Coleridge, and Shelley, gave me great pleasure, and even as a schoolboy I took intense delight in Shakespeare, especially in the historical plays. I have also said that formerly pictures gave me considerable, and music very great delight. But now for many years I cannot endure to read a line of poetry: I have tried lately to read Shakespeare, and found it so intolerably dull that it nauseated me. I have also almost lost my taste for pictures or music. Music generally sets me thinking too energetically on what I have been at work on, instead of giving me pleasure.
We could interpret Darwin here as suggesting that casting his mind too doggedly into his scientific work somehow ruined his capacity to appreciate Shakespeare. But, like all thinkers and writers of great nuance and sophistication, his ideas are easy to mischaracterize through selective quotation (or, if you’re Ben Stein or any of the other unscrupulous writers behind creationist propaganda like the pseudo-documentary Expelled, you can just lie about what he actually wrote).
One of the most charming things about Darwin is that his writing is often more exploratory than merely informative. He writes in search of answers he has yet to discover. In a wider context, the quote about his mind becoming a machine, for instance, reads,
This curious and lamentable loss of the higher aesthetic tastes is all the odder, as books on history, biographies, and travels (independently of any scientific facts which they may contain), and essays on all sorts of subjects interest me as much as ever they did. My mind seems to have become a kind of machine for grinding general laws out of large collections of facts, but why this should have caused the atrophy of that part of the brain alone, on which the higher tastes depend, I cannot conceive. A man with a mind more highly organised or better constituted than mine, would not, I suppose, have thus suffered; and if I had to live my life again, I would have made a rule to read some poetry and listen to some music at least once every week; for perhaps the parts of my brain now atrophied would thus have been kept active through use. The loss of these tastes is a loss of happiness, and may possibly be injurious to the intellect, and more probably to the moral character, by enfeebling the emotional part of our nature.
His concern for his lost aestheticism notwithstanding, Darwin’s humanism, his humanity, radiates in his writing with a warmth that belies any claim about thinking like a machine, just as the intelligence that shows through it gainsays his humble deprecations about the organization of his mind.
In this excerpt, Darwin, perhaps inadvertently, even manages to put forth a theory of the function of art. Somehow, poetry and music not only give us pleasure and make us happy—enjoying them actually constitutes a type of mental exercise that strengthens our intellect, our emotional awareness, and even our moral character. Novelist and cognitive psychologist Keith Oatley explores this idea of human betterment through aesthetic experience in his book Such Stuff as Dreams: The Psychology of Fiction. This subtitle is notably underwhelming given the long history of psychoanalytic theorizing about the meaning and role of literature. However, whereas psychoanalysis has fallen into disrepute among scientists because of its multiple empirical failures and a general methodological hubris common among its practitioners, the work of Oatley and his team at the University of Toronto relies on much more modest, and at the same time much more sophisticated, scientific protocols. One of the tools these researchers use, The Reading the Mind in the Eyes Test, was in fact first developed to research our new category of people with machine-like minds. What the researchers find bolsters Darwin’s impression that art, at least literary art, functions as a kind of exercise for our faculty of understanding and relating to others.
Reasoning that “fiction is a kind of simulation of selves and their vicissitudes in the social world” (159), Oatley and his colleague Raymond Mar hypothesized that people who spent more time trying to understand fictional characters would be better at recognizing and reasoning about other, real-world people’s states of mind. So they devised a test to assess how much fiction participants in their study read based on how well they could categorize a long list of names according to which ones belonged to authors of fiction, which to authors of nonfiction, and which to non-authors. They then had participants take the Mind-in-the-Eyes Test, which consists of matching close-up pictures of people’s eyes with terms describing their emotional state at the time they were taken. The researchers also had participants take the Interpersonal Perception Test, which has them answer questions about the relationships of people in short video clips featuring social interactions. An example question might be “Which of the two children, or both, or neither, are offspring of the two adults in the clip?” (Imagine Sherlock Holmes taking this test.) As hypothesized, Oatley writes, “We found that the more fiction people read, the better they were at the Mind-in-the-Eyes Test. A similar relationship held, though less strongly, for reading fiction and the Interpersonal Perception Test” (159).
One major shortcoming of this study is that it fails to establish causality; people who are naturally better at reading emotions and making sound inferences about social interactions may gravitate to fiction for some reason. So Mar set up an experiment in which he had participants read either a nonfiction article from an issue of the New Yorker or a work of short fiction chosen to be the same length and require the same level of reading skills. When the two groups then took a test of social reasoning, the ones who had read the short story outperformed the control group. Both groups also took a test of analytic reasoning as a further control; on this variable there was no difference in performance between the groups. The outcome of this experiment, Oatley stresses, shouldn’t be interpreted as evidence that reading one story will increase your social skills in any meaningful and lasting way. But reading habits established over long periods likely explain the more significant differences between individuals found in the earlier study. As Oatley explains,
Readers of fiction tend to become more expert at making models of others and themselves, and at navigating the social world, and readers of non-fiction are likely to become more expert at genetics, or cookery, or environmental studies, or whatever they spend their time reading. Raymond Mar’s experimental study on reading pieces from the New Yorker is probably best explained by priming. Reading a fictional piece puts people into a frame of mind of thinking about the social world, and this is probably why they did better at the test of social reasoning. (160)
Connecting these findings to real-world outcomes, Oatley and his team also found that “reading fiction was not associated with loneliness,” as the stereotype suggests, “but was associated with what psychologists call high social support, being in a circle of people whom participants saw a lot, and who were available to them practically and emotionally” (160).
These studies by the University of Toronto team have received wide publicity, but the people who should be the most interested in them have little or no idea how to go about making sense of them. Most people simply either read fiction or they don’t. If you happen to be of the tribe who studies fiction, then you were probably educated in a way that engendered mixed feelings—profound confusion really—about science and how it works. In his review of The Storytelling Animal, a book in which Jonathan Gottschall incorporates the Toronto team’s findings into the theory that narrative serves the adaptive function of making human social groups more cooperative and cohesive, Adam Gopnik sneers,
Surely if there were any truth in the notion that reading fiction greatly increased our capacity for empathy then college English departments, which have by far the densest concentration of fiction readers in human history, would be legendary for their absence of back-stabbing, competitive ill-will, factional rage, and egocentric self-promoters; they’d be the one place where disputes are most often quickly and amiably resolved by mutual empathetic engagement. It is rare to see a thesis actually falsified as it is being articulated.
Oatley himself is well aware of the strange case of university English departments. He cites a report by Willie van Peer on a small study he did comparing students in the natural sciences to students in the humanities. Oatley explains,
There was considerable scatter, but on average the science students had higher emotional intelligence than the humanities students, the opposite of what was expected; van Peer indicts teaching in the humanities for often turning people away from human understanding towards technical analyses of details. (160)
Oatley suggests in a footnote that an earlier study corroborates van Peer’s indictment. It found that high school students who show more emotional involvement with short stories—the type of connection that would engender greater empathy—did proportionally worse on standard academic assessments of English proficiency. The clear implication of these findings is that the way literature is taught in universities and high schools is long overdue for an in-depth critical analysis.
The idea that literature has the power to make us better people is not new; indeed, it was the very idea on which the humanities were originally founded. We have to wonder what people like Gopnik believe the point of celebrating literature is if not to foster greater understanding and empathy. If you either enjoy it or you don’t, and it has no beneficial effects on individuals or on society in general, why bother encouraging anyone to read? Why bother writing essays about it in the New Yorker? Tellingly, many scholars in the humanities began doubting the power of art to inspire greater humanity around the same time they began questioning the value and promise of scientific progress. Oatley writes,
Part of the devastation of World War II was the failure of German citizens, one of the world’s most highly educated populations, to prevent their nation’s slide into Nazism. George Steiner has famously asserted: “We know that a man can read Goethe or Rilke in the evening, that he can play Bach and Schubert, and go to his day’s work at Auschwitz in the morning.” (164)
Postwar literary theory and criticism have, perversely, tended toward the view that literature and language in general serve as vessels for passing on all the evils inherent in our western, patriarchal, racist, imperialist culture. The purpose of literary analysis then becomes to sift out these elements and resist them. Unfortunately, such accusatory theories leave unanswered the question of why, if literature inculcates oppressive ideologies, we should bother reading it at all. As van Peer muses in the report Oatley cites, “The Inhumanity of the Humanities,”
Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)
Oatley and van Peer point out, moreover, that the evidence for concentration camp workers having any degree of literary or aesthetic sophistication is nonexistent. According to the best available evidence, most of the greatest atrocities were committed by soldiers who never graduated high school. The suggestion that some type of cozy relationship existed between Nazism and an enthusiasm for Goethe runs afoul of recorded history. As Oatley points out,
Apart from propensity to violence, nationalism, and anti-Semitism, Nazism was marked by hostility to humanitarian values in education. From 1933 onwards, the Nazis replaced the idea of self-betterment through education and reading by practices designed to induce as many as possible into willing conformity, and to coerce the unwilling remainder by justified fear. (165)
Oatley also cites the work of historian Lynn Hunt, whose book Inventing Human Rights traces the original social movement for the recognition of universal human rights to the mid-1700s, when what we recognize today as novels were first being written. Other scholars like Steven Pinker have pointed out too that, while it’s hard not to dwell on tragedies like the Holocaust, even atrocities of that magnitude are resoundingly overmatched by the much larger post-Enlightenment trend toward peace, freedom, and the wider recognition of human rights. It’s sad that one of the lasting legacies of all the great catastrophes of the 20th Century is a tradition in humanities scholarship that has the people who are supposed to be the custodians of our literary heritage hell-bent on teaching us all the ways that literature makes us evil.
Because Oatley is a central figure in what we can only hope is a movement to end the current reign of self-righteous insanity in literary studies, it pains me not to be able to recommend Such Stuff as Dreams to anyone but dedicated specialists. Oatley writes in the preface that he has “imagined the book as having some of the qualities of fiction. That is to say I have designed it to have a narrative flow” (x), and it may simply be that this suggestion set my expectations too high. But the book is poorly edited, the prose is bland and often rolls over itself into graceless tangles, and a couple of the chapters seem like little more than haphazardly collated reports of studies and theories, none exactly off-topic, none completely without interest, but all lacking any central progression or theme. The book often reads more like an annotated bibliography than a story. Oatley’s scholarly range is impressive, however, bearing not just on cognitive science and literature through the centuries but extending as well to the work of important literary theorists. The book is never unreadable, never opaque, but it’s not exactly a work of art in its own right.
Insofar as Such Stuff as Dreams is organized around a central idea, it is that fiction ought to be thought of not as “a direct impression of life,” as Henry James suggests in his famous essay “The Art of Fiction,” and as many contemporary critics—notably James Wood—seem to think of it. Rather, Oatley agrees with Robert Louis Stevenson’s response to James’s essay, “A Humble Remonstrance,” in which he writes that
Life is monstrous, infinite, illogical, abrupt and poignant; a work of art in comparison is neat, finite, self-contained, rational, flowing, and emasculate. Life imposes by brute energy, like inarticulate thunder; art catches the ear, among the far louder noises of experience, like an air artificially made by a discreet musician. (qtd on pg 8)
Oatley theorizes that stories are simulations, much like dreams, that go beyond mere reflections of life to highlight through defamiliarization particular aspects of life, to cast them in a new light so as to deepen our understanding and experience of them. He writes,
Every true artistic expression, I think, is not just about the surface of things. It always has some aspect of the abstract. The issue is whether, by a change of perspective or by a making the familiar strange, by means of an artistically depicted world, we can see our everyday world in a deeper way. (15)
Critics of high-brow literature like Wood appreciate defamiliarization at the level of description; Oatley is suggesting here though that the story as a whole functions as a “metaphor-in-the-large” (17), a way of not just making us experience as strange some object or isolated feeling, but of reconceptualizing entire relationships, careers, encounters, biographies—what we recognize in fiction as plots. This is an important insight, and it topples verisimilitude from its ascendant position atop the hierarchy of literary values while rendering complaints about clichéd plots potentially moot. Didn’t Shakespeare recycle plots after all?
The theory of fiction as a type of simulation to improve social skills and possibly to facilitate group cooperation is emerging as the frontrunner in attempts to explain narrative interest in the context of human evolution. It is to date, however, impossible to rule out the possibility that our interest in stories is not directly adaptive but instead emerges as a byproduct of other traits that confer more immediate biological advantages. The finding that readers track actions in stories with the same brain regions that activate when they witness similar actions in reality, or when they engage in them themselves, is important support for the simulation theory. But the function of mirror neurons isn’t well enough understood yet for us to determine from this study how much engagement with fictional stories depends on the reader's identifying with the protagonist. Oatley’s theory is more consonant with direct and straightforward identification. He writes,
A very basic emotional process engages the reader with plans and fortunes of a protagonist. This is what often drives the plot and, perhaps, keeps us turning the pages, or keeps us in our seat at the movies or at the theater. It can be enjoyable. In art we experience the emotion, but with it the possibility of something else, too. The way we see the world can change, and we ourselves can change. Art is not simply taking a ride on preoccupations and prejudices, using a schema that runs as usual. Art enables us to experience some emotions in contexts that we would not ordinarily encounter, and to think of ourselves in ways that usually we do not. (118)
Much of this change, Oatley suggests, comes from realizing that we too are capable of behaving in ways that we might not like. “I am capable of this too: selfishness, lack of sympathy” (193), is what he believes we think in response to witnessing good characters behave badly.
Oatley’s theory has a lot to recommend it, but William Flesch’s theory of narrative interest, which suggests we don’t identify with fictional characters directly but rather track them and anxiously hope for them to get whatever we feel they deserve, seems much more plausible in the context of our response to protagonists behaving in surprisingly selfish or antisocial ways. When I see Ed Norton as Tyler Durden beating Angel Face half to death in Fight Club, for instance, I don’t think, hey, that’s me smashing that poor guy’s face with my fists. Instead, I think, what the hell are you doing? I had you pegged as a good guy. I know you’re trying not to be as much of a pushover as you used to be but this is getting scary. I’m anxious that Angel Face doesn’t get too damaged—partly because I imagine that would be devastating to Tyler. And I’m anxious lest this incident be a harbinger of worse behavior to come.
The issue of identification is just one of several interesting questions that can lend itself to further research. Oatley and Mar’s studies are not enormous in terms of sample size, and their subjects were mostly young college students. What types of fiction work the best to foster empathy? What types of reading strategies might we encourage students to apply to reading literature—apart from trying to remove obstacles to emotional connections with characters? But, aside from the Big-Bad-Western Empire myth that currently has humanities scholars grooming successive generations of deluded ideologues to be little more than culture vultures presiding over the creation and celebration of Loser Lit, the other main challenge to transporting literary theory onto firmer empirical grounds is the assumption that the arts in general and literature in particular demand a wholly different type of thinking to create and appreciate than the type that goes into the intricate mechanics and intensely disciplined practices of science.
As Oatley and the Toronto team have shown, people who enjoy fiction tend to have the opposite of autism. And people who do science are, well, Sheldon. Interestingly, though, the writers of The Big Bang Theory, for whatever reason, included some contraindications for a diagnosis of autism or Asperger’s in Sheldon’s character. Like the other scientists in the show, he’s obsessed with comic books, which require at least some understanding of facial expression and body language to follow. As Simon Baron-Cohen, the autism researcher who designed the Mind-in-the-Eyes test, explains, “Autism is an empathy disorder: those with autism have major difficulties in 'mindreading' or putting themselves into someone else’s shoes, imagining the world through someone else’s feelings” (137). Baron-Cohen has coined the term “mindblindness” to describe the central feature of the disorder, and many have posited that the underlying cause is abnormal development of the brain regions devoted to perspective taking and understanding others, what cognitive psychologists refer to as our Theory of Mind.
To follow comic book plotlines, Sheldon would have to make ample use of his own Theory of Mind. He’s also given to absorption in various science fiction shows on TV. If he were only interested in futuristic gadgets, as an autistic would be, he could just as easily get more scientifically plausible versions of them in any number of nonfiction venues. By Baron-Cohen’s definition, Sherlock Holmes can’t possibly have Asperger’s either because his ability to get into other people’s heads is vastly superior to pretty much everyone else’s. As he explains in “The Musgrave Ritual,”
You know my methods in such cases, Watson: I put myself in the man’s place, and having first gauged his intelligence, I try to imagine how I should myself have proceeded under the same circumstances.
What about Darwin, though, that demigod of science who openly professed to being nauseated by Shakespeare? Isn’t he a prime candidate for entry into the surprisingly unpopulated ranks of heartless, data-crunching scientists whose thinking lends itself so conveniently to cooptation by oppressors and committers of wartime atrocities? It turns out that though Darwin held many of the same racist views as nearly all educated men of his time, his ability to empathize across racial and class divides was extraordinary. Darwin was not himself a Social Darwinist; that doctrine was devised by Herbert Spencer to justify inequality (and it still has currency today among political conservatives). And Darwin was also a passionate abolitionist, as is clear in the following excerpts from The Voyage of the Beagle:
On the 19th of August we finally left the shores of Brazil. I thank God, I shall never again visit a slave-country. To this day, if I hear a distant scream, it recalls with painful vividness my feelings, when passing a house near Pernambuco, I heard the most pitiable moans, and could not but suspect that some poor slave was being tortured, yet knew that I was as powerless as a child even to remonstrate.
Darwin is responding to cruelty in a way no one around him at the time would have. And note how deeply it pains him, how profound and keenly felt his sympathy is.
I was present when a kind-hearted man was on the point of separating forever the men, women, and little children of a large number of families who had long lived together. I will not even allude to the many heart-sickening atrocities which I authentically heard of;—nor would I have mentioned the above revolting details, had I not met with several people, so blinded by the constitutional gaiety of the negro as to speak of slavery as a tolerable evil.
The question arises, not whether Darwin had sacrificed his humanity to science, but why he had so much more humanity than many other intellectuals of his day.
It is often attempted to palliate slavery by comparing the state of slaves with our poorer countrymen: if the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin; but how this bears on slavery, I cannot see; as well might the use of the thumb-screw be defended in one land, by showing that men in another land suffered from some dreadful disease.
And finally we come to the matter of Darwin’s Theory of Mind, which was quite clearly in no way deficient.
Those who look tenderly at the slave owner, and with a cold heart at the slave, never seem to put themselves into the position of the latter;—what a cheerless prospect, with not even a hope of change! picture to yourself the chance, ever hanging over you, of your wife and your little children—those objects which nature urges even the slave to call his own—being torn from you and sold like beasts to the first bidder! And these deeds are done and palliated by men who profess to love their neighbours as themselves, who believe in God, and pray that His Will be done on earth! It makes one's blood boil, yet heart tremble, to think that we Englishmen and our American descendants, with their boastful cry of liberty, have been and are so guilty; but it is a consolation to reflect, that we at least have made a greater sacrifice than ever made by any nation, to expiate our sin. (530-31)
I suspect that Darwin’s distaste for Shakespeare was born of oversensitivity. He doesn't say music failed to move him; he didn’t like it because it made him think “too energetically.” And as aesthetically pleasing as Shakespeare is, existentially speaking, his plays tend to be pretty harsh, even the comedies. When Prospero says, "We are such stuff / as dreams are made on" in Act 4 of The Tempest, he's actually talking not about characters in stories, but about how ephemeral and insignificant real human lives are. But why, beyond some likely nudge from his inherited temperament, was Darwin so sensitive? Why was he so empathetic even to those so vastly different from him? After admitting he’d lost his taste for Shakespeare, paintings, and music, he goes on to say,
On the other hand, novels which are works of the imagination, though not of a very high order, have been for years a wonderful relief and pleasure to me, and I often bless all novelists. A surprising number have been read aloud to me, and I like all if moderately good, and if they do not end unhappily—against which a law ought to be passed. A novel, according to my taste, does not come into the first class unless it contains some person whom one can thoroughly love, and if a pretty woman all the better.
Also read:
STORIES, SOCIAL PROOF, & OUR TWO SELVES
And:
LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME
And:
[Check out the Toronto group's blog at onfiction.ca]
Bedtime Ghost Story for Adults
A couple meets a nice old lady who lives in the apartment behind theirs. Little do they know they’ll end up adopting her cat when she dies. This is the setup to the story a man tells his girlfriend when she asks him to improvise one. It’s good enough to surprise them both.
I had just moved into the place on Berry Street with my girlfriend and her two cats. A very old lady lived in the apartment behind us. She came out to the dumpster while I was breaking down boxes and throwing them in. “Can you take those out?” she asked in her creaky voice. I explained I had nowhere else to put them if I did. “But it gets filled up and I can’t get anything in there,” she complained. I said she could come knock on our door if she ever had to throw something in the dumpster and it was too full. I’d help her.
A couple nights later, just as we were about to go to bed, my girlfriend asked me to tell her a story. When we first started dating, I would improvise elaborate stories at her request—to impress her and because it was fun. I hadn’t done it in a while.
******
“There was a couple who just moved into a new apartment,” I began as we climbed into bed.
“Uh-huh,” she said, already amused.
“And this apartment was at the front part of a really old house, and there was a really old lady who lived in the apartment behind theirs. Well, they got all their stuff moved in and they thought their place was really awesome and everything was going great. And the old lady liked the couple a lot… She liked them because she liked their cat.”
“Oh, they have a cat, huh? You didn’t say anything about a cat.”
“I just did.”
“What color is this cat?”
“Orange.”
“Oh, okay.”
“What happened was that one day the cat went missing and it turned out the cat had wandered to the old lady’s porch and she let it in her apartment. And she really liked it. But the girl was like, ‘Where’s my cat?’ and she went looking for it and got all worried. Finally, she knocked on the old lady’s door and asked if she’d seen it.
“The old lady invited the girl in to give her her cat back and while they were talking the old lady was thinking, wow, I really like this girl and she has a really nice cat and I liked having the cat over here. And the old lady had grown up in New Orleans, so she and her sisters were all into voodoo and hoodoo and spells and stuff. They were witches.”
“Oh man.”
“Yeah, so the old lady was a witch. And since she liked the young girl so much she decided to do something for her, so while she was talking to her she had something in her hand. And she held up her hand and blew it in the girl’s face. It was like water and ashes or something. The girl had no idea what it was and she was really weirded out and like, ‘What the hell did she do that for?’ But she figured it was no big deal. The lady was really old and probably a little dotty she figured. But she still kind of hurried up and got her cat and went home.
“Well, everything was normal until the boyfriend came home, and then the girl was all crazy and had to have sex with him immediately. They ended up having sex all night. And from then on it was like whenever they saw each other they couldn’t help themselves and they were just having sex all the time.”
“Oh boy.”
“Eventually, it was getting out of hand because they were both exhausted all day and they never talked to their friends and they started missing work and stuff. But they were really happy. It was great. So the girl started wondering if maybe the old lady had done something to her when she blew that stuff in her face. And then she thought maybe she should go and ask her, the old lady, if that’s what had happened. And if it was she thought, you know, she should thank her. She thought about all this for a long time, but then she would see the boyfriend and of course after that she would forget everything and eventually she just stopped thinking about it.
“Then one day their cat went missing, their other cat.”
“What color is this one?”
“Black. And, since she found the other cat at the old lady’s before, the girl thought maybe she should go and ask the old lady again. So one day when she was getting home from work she saw the old lady sitting on her porch and she goes up to talk to her. And she’s trying to make small talk and tell the old lady about the cat and ask her if she’s seen it when the old lady turns around and, like, squints and wrinkles her nose and kind of goes like this—looking back—and says, ‘You didn’t even thank me!’ before walking away and going in her door.”
“Ahh.”
“Yeah, and the girl’s all freaked out by it too.”
“Oh!—I’m gonna have to roll over and make sure she’s not out there.”
“Okay… So the girl’s all freaked out, but she’s still like, ‘Where’s my cat?’ So one time after they just had sex for like the umpteenth time she tells her boyfriend we gotta find the cat. And the boyfriend is like, ‘All right, I’m gonna go talk to this old lady and find out what the hell happened to our cat.’”
“Oh! What did you do to Mikey?”
“I didn’t do anything. Just listen… Anyway, he’s determined to find out if the cat’s in this old lady’s apartment. So he goes and knocks on her door and is all polite and everything. But the old lady just says, ‘You didn’t even thank me!’ and slams the door on him. He doesn’t know what else to do at this point so he calls the police, and he tells them that their cat’s missing and the last time, when the other cat was missing, it turned up at the old lady’s house. And he told them the old lady was acting all weird and stuff too.
“But of course the police can’t really do anything because there’s no way anyone knows the cat’s in the old lady’s house and they tell him to just wait and see if maybe the cat ran away or whatever. And the girl’s all upset and the guy’s getting all pissed off and trying to come up with some kind of scheme to get into the old lady’s house.
“–But they never actually get around to doing anything because they’re having so much sex and, even though they still miss the cat and everything, a lot of the time they almost forget about it. And it just goes on like this for a long time with the couple suspicious of the old lady and wondering where their cat is but not being able to do anything.
“And this goes on until one day—when the old lady just mysteriously dies. When the police get to her apartment, sure enough there’s the couple’s black cat.”
“Ooh, Mikey.”
“So the police come and tell the guy, you know, hey, we found your cat, just like you said. And the guy goes and gets the cat and brings it home. But while he’s in the old lady’s apartment he’s wondering the whole time about the spell she put on him and his girlfriend, and he’s a little worried that maybe since she died the spell might be broken. But he gets the cat and takes it home. And when his girlfriend comes home it’s like she gets all excited to see it, but only for like a minute, and then it’s like before and they can’t help themselves. They have to have sex.
“Well, this goes on and on and things get more and more out of hand until both of them lose their jobs, their friends just drift away because they never talk to them, and eventually they can’t pay their rent so they lose their apartment. So they get their cats and as much of their stuff as they can and they go to this spot they know by the river where some of their hippie friends used to camp. And they just live there like before, with their cats, just having sex all the time.
“One night after they just had sex again, they’re sitting by the campfire and the guy says, ‘You know, we lost our jobs and our friends and our apartment, and we’re living in the woods here by the river, and you’d think we’d be pretty miserable. But I think I have everything I need right here.’ He’s thinking about having sex again even as he’s saying this. And he’s like, ‘Really, I’m happy as hell. I don’t remember ever being this happy.’
“And the girl is like, ‘Yeah, me too. I actually kind of like living out here with you.’
“So they’re about to start having sex again when the black cat turns and looks at them and says, ‘And you didn’t even thank me!’”
Also read
THE SMOKING BUDDHA: ANOTHER GHOST STORY FOR ADULTS (AND YOUNG ADULTS TOO)
Percy Fawcett’s 2 Lost Cities
David Grann’s “The Lost City of Z,” about Percy Fawcett’s expeditions to find a legendary pre-Columbian city, is an absolute joy to read. But it raises questions about what it is we hope our favorite explorers find in the regrown jungles.
In his surprisingly profound, insanely fun book The Lost City of Z: A Tale of Deadly Obsession in the Amazon, David Grann writes about his visit to a store catering to outdoorspeople in preparation for his trip to research, and to some degree retrace, the last expedition of renowned explorer Percy Fawcett. Grann, a consummate New Yorker, confesses he’s not at all the outdoors type, but once he’s on the trail of a story he does share a few traits with adventurers like Fawcett. Wandering around the store after having been immersed in the storied history of the Royal Geographical Society, Grann observes,
Racks held magazines like Hooked on the Outdoors and Backpacker: The Outdoors at Your Doorstep, which had articles titled “Survive a Bear Attack!” and “America’s Last Wild Places: 31 Ways to Find Solitude, Adventure—and Yourself.” Wherever I turned, there were customers, or “gear heads.” It was as if the fewer opportunities for genuine exploration, the greater the means were for anyone to attempt it, and the more baroque the ways—bungee cording, snowboarding—that people found to replicate the sensation. Exploration, however, no longer seemed aimed at some outward discovery; rather, it was directed inward, to what guidebooks and brochures called “camping and wilderness therapy” and “personal growth through adventure.” (76)
Why do people feel such a powerful attraction to wilderness? And has there really been a shift from outward to inward discovery at the heart of our longings to step away from the paved roads and noisy bustle of civilization? As the element of the extreme makes clear, part of the pull comes from the thrill of facing dangers of one sort or another. But can people really be wired in such a way that many of them are willing to risk dying for the sake of a brief moment of accelerated heart-rate and a story they can lovingly exaggerate into their old age?
The catalogue of dangers Fawcett and his companions routinely encountered in the Amazon is difficult to read about without experiencing a viscerally unsettling glimmer of the sensations associated with each affliction. The biologist James Murray, who had accompanied Ernest Shackleton on his mission to Antarctica in 1907, joined Fawcett’s team for one of its journeys into the South American jungle four years later. This very different type of exploration didn’t turn out nearly as well for him. One of Fawcett’s sturdiest companions, Henry Costin, contracted malaria on that particular expedition and became delirious with fever. “Murray, meanwhile,” Grann writes,
seemed to be literally coming apart. One of his fingers grew inflamed after brushing against a poisonous plant. Then the nail slid off, as if someone had removed it with pliers. Then his right hand developed, as he put it, a “very sick, deep suppurating wound,” which made it “agony” even to pitch his hammock. Then he was stricken with diarrhea. Then he woke up to find what looked like worms in his knee and arm. He peered closer. They were maggots growing inside him. He counted fifty around his elbow alone. “Very painful now and again when they move,” Murray wrote. (135)
The thick clouds of mosquitoes leave every traveler pocked and swollen, and nearly all of them get sick sooner or later. On these journeys, according to Fawcett, “the healthy person was regarded as a freak, an exception, extraordinary” (100). This observation was somewhat boastful; Fawcett himself remained blessedly immune to contagion throughout most of his career as an explorer.
Hammocks are required at night to avoid poisonous or pestilence-carrying ants. Pit vipers abound. The men had to sleep with nets draped over them to ward off the incessantly swarming insects. Fawcett and his team even fell prey to vampire bats. “We awoke to find our hammocks saturated with blood,” he wrote, “for any part of our persons touching the mosquito-nets or protruding beyond them were attacked by the loathsome animals” (127). Such wounds, they knew, could spell their doom the next time they waded into the water of the Amazon. “When bathing,” Grann writes, “Fawcett nervously checked his body for boils and cuts. The first time he swam across the river, he said, ‘there was an unpleasant sinking feeling in the pit of my stomach.’ In addition to piranhas, he dreaded candirus and electric eels, or puraques” (91).
Candirus are tiny catfish notorious for squirming their way up human orifices like the urethra, where they remain lodged to parasitize the bloodstream (although this tendency of theirs turns out to be a myth). But piranhas and eels aren’t even the most menacing monsters in the Amazon. As Grann writes,
One day Fawcett spied something along the edge of the sluggish river. At first it looked like a fallen tree, but it began undulating toward the canoes. It was bigger than an electric eel, and when Fawcett’s companions saw it they screamed. Fawcett lifted his rifle and fired at the object until smoke filled the air. When the creature ceased to move, the men pulled a canoe alongside it. It was an anaconda. In his reports to the Royal Geographical Society, Fawcett insisted that it was longer than sixty feet. (92)
This was likely an exaggeration, since the record documented length for an anaconda is just under 27 feet; and yet the men considered their mission a scientific one and so would’ve striven for objectivity. Fawcett even unsheathed his knife to slice off a piece of the snake’s flesh for a specimen jar, but as he broke the skin the creature jolted back to life and lunged at the men in the canoe, who panicked and pulled desperately at the oars. Fawcett couldn’t convince his men to return for another attempt.
Though Fawcett had always been fascinated by stories of hidden treasures and forgotten civilizations, the ostensible purpose of his first trip into the Amazon Basin was a surveying mission. As an impartial member of the British Royal Geographical Society, he’d been commissioned by the Bolivian and Brazilian governments to map out their borders so they could avoid a land dispute. But over time another purpose began to consume Fawcett. “Inexplicably,” he wrote, “amazingly—I knew I loved that hell. Its fiendish grasp had captured me, and I wanted to see it again” (116). In 1911, the archeologist Hiram Bingham, with the help of local guides, discovered the colossal ruins of Machu Picchu high in the Peruvian Andes. News of the discovery “fired Fawcett’s imagination” (168), according to Grann, and he began cobbling together evidence he’d come across in the form of pottery shards and local folk histories into a theory about a lost civilization deep in the Amazon, in what many believed to be a “counterfeit paradise,” a lush forest that seemed abundantly capable of sustaining intense agriculture but in reality could only support humans who lived in sparsely scattered tribes.
Percy Harrison Fawcett’s character was in many ways an embodiment of some of the most paradoxical currents of his age. A white explorer determined to conquer unmapped regions, he was nonetheless appalled by his fellow Englishmen’s treatment of indigenous peoples in South America. At the time, rubber was to the Amazon what ivory was to the Belgian Congo, what oil is today to the Middle East, and what diamonds are to many parts of central and western Africa. When the Peruvian Amazon Company, a rubber outfit whose shares were sold on the London Stock Exchange, attempted to enslave Indians for cheap labor, it led to violent resistance, which culminated in widespread torture and massacre.
Sir Roger Casement, a British consul general who conducted an investigation of the PAC’s practices, determined that this one rubber company alone was responsible for the deaths of thirty thousand Indians. Grann writes,
Long before the Casement report became public, in 1912, Fawcett denounced the atrocities in British newspaper editorials and in meetings with government officials. He once called the slave traders “savages” and “scum.” Moreover, he knew that the rubber boom had made his own mission exceedingly more difficult and dangerous. Even previously friendly tribes were now hostile to foreigners. Fawcett was told of one party of eighty men in which “so many of them were killed with poisoned arrows that the rest abandoned the trip and retired”; other travelers were found buried up to their waists and left to be eaten alive by fire ants, maggots, and bees. (90)
Fawcett, despite the ever-looming threat of attack, was equally appalled by many of his fellow explorers’ readiness to shoot at Indians who approached them in a threatening manner. He had much more sympathy for the Indian Protection Service, whose motto was, “Die if you must, but never kill” (163), and he prided himself on being able to come up with clever ways to entice tribesmen to let his teams pass through their territories without violence. Once, when arrows started raining down on his team’s canoes from the banks, he ordered his men not to flee and instead had one of them start playing his accordion while the rest sang to the tune—and it actually worked (148).
But Fawcett was no softy. He was notorious for pushing ahead at a breakneck pace and showing nothing but contempt for members of his own team who couldn’t keep up owing to a lack of conditioning or fell behind owing to sickness. James Murray, the veteran of Shackleton’s Antarctic expedition whose flesh had become infested with maggots, experienced Fawcett’s monomania for maintaining progress firsthand. “This calm admission of the willingness to abandon me,” Murray wrote, “was a queer thing to hear from an Englishman, though it did not surprise me, as I had gauged his character long before” (137). Eventually, Fawcett did put his journey on hold to search out a settlement where they might find help for the dying man. When they came across a frontiersman with a mule, they got him to agree to carry Murray out of the jungle, allowing the rest of the team to continue with their expedition. To everyone’s surprise, Murray, after disappearing for a while, turned up alive—and furious. “Murray accused Fawcett of all but trying to murder him,” Grann writes, “and was incensed that Fawcett had insinuated that he was a coward” (139).
The theory of a lost civilization crystallized in the explorer’s mind when, while rummaging through old records at the National Library of Brazil, he found a document written by a Portuguese bandeirante—soldier of fortune—describing “a large, hidden, and very ancient city… discovered in the year 1753” (180). As Grann explains,
Fawcett narrowed down the location. He was sure that he had found proof of archaeological remains, including causeways and pottery, scattered throughout the Amazon. He even believed that there was more than a single ancient city—the one the bandeirante described was most likely, given the terrain, near the eastern Brazilian state of Bahia. But Fawcett, consulting archival records and interviewing tribesmen, had calculated that a monumental city, along with possibly even remnants of its population, was in the jungle surrounding the Xingu River in the Brazilian Mato Grosso. In keeping with his secretive nature, he gave the city a cryptic and alluring name, one that, in all his writings and interviews, he never explained. He called it simply Z. (182)
Fawcett was planning a mission for the specific purpose of finding Z when he was called away to serve in the First World War. The case for Z had up to that point been based mostly on scientific curiosity, though there was naturally a bit of the Indiana Jones dyad—“fortune and glory”—sharpening his already keen interest. Ever since Hernan Cortes marched into the Aztec city of Tenochtitlan in 1519, and Francisco Pizarro conquered Cuzco, the capital of the Inca Empire, fourteen years later, there had been rumors of a city overflowing with gold called El Dorado, literally “the gilded man,” after an account by the sixteenth-century chronicler Gonzalo Fernandez de Oviedo of a king who covered his body every day in gold dust only to wash it away again at night (169-170). It’s impossible to tell how many thousands of men died while searching for that particular lost city.
Fawcett, however, when faced with the atrocities of industrial-scale war, began to imbue Z with an altogether different sort of meaning. As a young man, he and his older brother Edmund had been introduced to Buddhism and the occult by a controversial figure named Helena Petrovna Blavatsky. To her followers, she was simply Madame Blavatsky. “For a moment during the late nineteenth century,” Grann writes, “Blavatsky, who claimed to be psychic, seemed on the threshold of founding a lasting religious movement” (46). It was called theosophy—“wisdom of the gods.” “In the past, Fawcett’s interest in the occult had been largely an expression of his youthful rebellion and scientific curiosity,” Grann explains, “and had contributed to his willingness to defy the prevailing orthodoxies of his own society and to respect tribal legends and religions.” In the wake of horrors like the Battle of the Somme, though, he started taking otherworldly concerns much more seriously. According to Grann, at this point,
his approach was untethered from his rigorous RGS training and acute powers of observation. He imbibed Madame Blavatsky’s most outlandish teachings about Hyperboreans and astral bodies and Lords of the Dark Face and keys to unlocking the universe—the Other World seemingly more tantalizing than the present one. (190)
It was even rumored that Fawcett was basing some of his battlefield tactics on his use of a Ouija board.
Brian Fawcett, Percy’s son and compiler of his diaries and letters into the popular volume Expedition Fawcett, began considering the implications of his father’s shift away from science years after he and Brian’s older brother Jack had failed to return from Fawcett’s last mission in search of Z. Grann writes,
Brian started questioning some of the strange papers that he had found among his father’s collection, and never divulged. Originally, Fawcett had described Z in strictly scientific terms and with caution: “I do not assume that ‘The City’ is either large or rich.” But by 1924 Fawcett had filled his papers with reams of delirious writings about the end of the world and about a mystical Atlantean kingdom, which resembled the Garden of Eden. Z was transformed into “the cradle of all civilizations” and the center of one of Blavatsky’s “White Lodges,” where a group of higher spiritual beings help to direct the fate of the universe. Fawcett hoped to discover a White Lodge that had been there since “the time of Atlantis,” and to attain transcendence. Brian wrote in his diary, “Was Daddy’s whole conception of ‘Z,’ a spiritual objective, and the manner of reaching it a religious allegory?” (299)
Grann suggests that the success of Blavatsky and others like her was a response to the growing influence of science and industrialization. “The rise of science in the nineteenth century had had a paradoxical effect,” he writes:
while it undermined faith in Christianity and the literal word of the Bible, it also created an enormous void for someone to explain the mysteries of the universe that lay beyond microbes and evolution and capitalist greed… The new powers of science to harness invisible forces often made these beliefs seem more credible, not less. If phonographs could capture human voices, and if telegraphs could send messages from one continent to the other, then couldn’t science eventually peel back the Other World? (47)
Even Arthur Conan Doyle, who was a close friend of Fawcett and whose book The Lost World was inspired by Fawcett’s accounts of his expeditions in the Amazon, was an ardent supporter of investigations into the occult. Grann quotes him as saying, “I suppose I am Sherlock Holmes, if anybody is, and I say that the case for spiritualism is absolutely proved” (48).
But pseudoscience—equal parts fraud and self-delusion—was at least a century old by the time H.P. Blavatsky began peddling it, and, tragically, ominously, it’s alive and well today. In the 1780s, electricity and magnetism were the invisible forces whose nature was being brought to light by science. The German physician Franz Anton Mesmer, from whom we get the term “mesmerize,” took advantage of these discoveries by positing a force called “animal magnetism” that runs through the bodies of all living things. Mesmer spent most of the decade in Paris, and in 1784 King Louis XVI was persuaded to appoint a committee to investigate Mesmer’s claims. One of the committee members, Benjamin Franklin, you’ll recall, knew something about electricity. Mesmer in fact liked to use one of Franklin’s own inventions, the glass harmonica (not that type of harmonica), as a prop for his dramatic demonstrations. The lead investigator, though, was the chemist and pioneer of modern science Antoine Lavoisier. (Ten years after serving on the committee, Lavoisier would fall victim to the invention named for yet another member, Dr. Guillotin.)
Mesmer claimed that illnesses were caused by blockages in the flow of animal magnetism through the body, and he carried around a stack of printed testimonials on the effectiveness of his cures. If the idea of energy blockages as the cause of sickness sounds familiar to you, so too will Mesmer’s methods for unblocking them. He, or one of his “adepts,” would establish some kind of physical contact so they could find the body’s magnetic poles. It usually involved prolonged eye contact and would eventually lead to a “crisis,” which meant the subject would fall back and begin to shake all over until she (the subjects were predominantly women) lost consciousness. If you’ve seen scenes of faith healers in action, you have the general idea. After a subject underwent several exposures to this magnetic treatment, each culminating in a crisis, the suffering would supposedly abate and the mesmerist would chalk up another cure. Tellingly, when Mesmer caught wind of some of the experimental methods the committee planned to use, he refused to participate. But then a man named Charles Deslon, one of Mesmer’s chief disciples, stepped up.
The list of ways Lavoisier devised to test the effectiveness of Deslon’s ministrations is long and amusing. At one point, he blindfolded a woman Deslon had treated before, telling her she was being magnetized right then and there, even though Deslon wasn’t even in the room. The suggestion alone was nonetheless sufficient to induce a classic crisis. In another experiment, the men replaced a door in Franklin’s house with a paper partition and had a seamstress who was supposed to be especially sensitive to magnetic effects sit in a chair with its back against the paper. For half an hour, an adept on the other side of the partition attempted to magnetize her through the paper, but all the while she just kept chatting amiably with the gentlemen in the room. When the adept finally revealed himself, though, he was able to induce a crisis in her immediately. The ideas of animal magnetism and magnetic cures were declared a total sham.
Lafayette, who brought French reinforcements to the Americans in the early 1780s, hadn’t heard about the debunking and tried to introduce the practice of mesmerism to the newly born country. But another prominent student of the Enlightenment, Thomas Jefferson, would have none of it.
Madame Blavatsky was cagey enough never to let the supernatural abilities she claimed to have be put to the test. But around the same time Fawcett was exploring the Amazon, another of Conan Doyle’s close friends, the magician and escape artist Harry Houdini, was busy conducting explorations of his own into the realm of spirits. They began in 1913 when Houdini’s mother died and, grief-stricken, he turned to mediums in an effort to reconnect with her. What happened instead was that, one after another, he caught out every medium in some type of trickery and found he was able to explain the deceptions behind all the supposedly supernatural occurrences of the séances he attended. Seeing the spiritualists as fraudsters exploiting the pain of their marks, Houdini became enraged. He ended up attending hundreds of séances, usually disguised as an old lady, and as soon as he caught the medium performing some type of trickery he would stand up, remove the disguise, and proclaim, “I am Houdini, and you are a fraud.”
Houdini went on to write an exposé, A Magician among the Spirits, and he liked to incorporate common elements of séances into his stage shows to demonstrate how easy they were for a good magician to recreate. In 1922, a few years before Fawcett disappeared with his son Jack while searching for Z, Scientific American asked Houdini to serve on a committee to further investigate the claims of spiritualists. The magazine even offered a cash prize to anyone who could meet some basic standards of evidence to establish the validity of their claims. The prize went unclaimed. After Houdini declared one of Conan Doyle's favorite mediums a fraud, the two men had a bitter falling out, the latter declaring the former an enemy of his cause. (Conan Doyle was convinced Houdini himself must've had supernatural powers and was inadvertently using them to sabotage the mediums.) The James Randi Educational Foundation, whose founder also began as a magician before becoming an investigator of paranormal claims, currently offers a considerably larger cash prize (a million dollars) to anyone who can pass a well-designed test and prove they have psychic powers. To date, a thousand applicants have tried to win the prize, but none have made it through preliminary testing.
So Percy Fawcett was searching, it seems, for two very different cities: one based on evidence of a pre-Columbian society, the other a product of his spiritual longing. Grann writes about a businessman who insists Fawcett disappeared because he actually reached this second version of Z, where he transformed into some kind of pure energy, just as James Redfield suggests happened to the entire Mayan civilization in his New Age novel The Celestine Prophecy. Apparently, you can take pilgrimages to a cave where Fawcett found this portal to the Other World. The website titled “The Great Web of Percy Harrison Fawcett” enjoins visitors: “Follow your destiny to Ibez where Colonel Fawcett lives an everlasting life.”
Today’s spiritualists and pseudoscientists rely more heavily on deliberately distorted and woefully dishonest references to quantum physics than they do on magnetism. But the differences are only superficial. The fundamental shift that occurred with the advent of science was that ideas could now be divided—some with more certainty than others—into two categories: those supported by sound methods and a steadfast devotion to following the evidence wherever it leads and those that emerge more from vague intuitions and wishful thinking. No sooner had science begun to resemble what it is today than people started trying to smuggle their favorite superstitions across the divide.
Not much separates New Age thinking from spiritualism—or either of them from long-established religion. They all speak to universal and timeless human desires. Following the evidence wherever it leads often means having to reconcile yourself to hard truths. As Carl Sagan writes in his indispensable paean to scientific thinking, The Demon-Haunted World,
Pseudoscience speaks to powerful emotional needs that science often leaves unfulfilled. It caters to fantasies about personal powers we lack and long for… In some of its manifestations, it offers satisfaction for spiritual hungers, cures for disease, promises that death is not the end. It reassures us of our cosmic centrality and importance… At the heart of some pseudoscience (and some religion also, New Age and Old) is the idea that wishing makes it so. How satisfying it would be, as in folklore and children’s stories, to fulfill our heart’s desire just by wishing. How seductive this notion is, especially when compared with the hard work and good luck usually required to achieve our hopes. (14)
As the website for one of the most recent New Age sensations, The Secret, explains, “The Secret teaches us that we create our lives with every thought every minute of every day.” (It might be fun to compare The Secret to Madame Blavatsky’s magnum opus The Secret Doctrine—but not my kind of fun.)
That spiritualism and pseudoscience satisfy emotional longings raises the question: what’s the harm in entertaining them? Isn’t it a little cruel for skeptics like Lavoisier, Houdini, and Randi to go around taking the wrecking ball to people’s beliefs, which they presumably depend on for consolation, meaning, and hope? To be sure, the wildfire spread of credulity, charlatanry, and consumerist epistemology—whereby you’re encouraged to believe whatever makes you look and feel the best—is no justification for hostility toward believers. The hucksters, self-deluded or otherwise, who profit from promulgating nonsense do, however, deserve, in my opinion, to be very publicly humiliated. Sagan points out too that when we simply keep quiet in response to other people making proclamations we know to be absurd, “we abet a general climate in which skepticism is considered impolite, science tiresome, and rigorous thinking somehow stuffy and inappropriate” (298). In such a climate,
Spurious accounts that snare the gullible are readily available. Skeptical treatments are much harder to find. Skepticism does not sell well. A bright and curious person who relies entirely on popular culture to be informed about something like Atlantis is hundreds or thousands of times more likely to come upon a fable treated uncritically than a sober and balanced assessment. (5)
Consumerist epistemology is also the reason why creationism and climate change denialism are immune from refutation—and is likely responsible for the difficulty we face in trying to bridge the political divide. No one can decide what should constitute evidence when everyone is following some inner intuitive light to the truth. On a more personal scale, you forfeit any chance you have at genuine discovery—either outward or inward—when you drastically lower the bar for acceptable truths to make sure all the things you really want to be true can easily clear it.
On the other hand, there are also plenty of people out there given to rolling their eyes anytime they’re informed of strangers’ astrological signs moments after meeting them (the last woman I met is a Libra). It’s not just skeptics and trained scientists who sense something flimsy and immature in the characters of New Agers and the trippy hippies. That’s probably why people are so eager to take on burdens and experience hardship in the name of their beliefs. That’s probably at least part of the reason too why people risk their lives exploring jungles and wildernesses. If a dude in a tie-dye shirt says he discovered some secret, sacred truth while tripping on acid, you’re not going to take him anywhere near as seriously as you do people like Joseph Conrad, who journeyed into the heart of darkness, or Percy Fawcett, who braved the deadly Amazon in search of ancient wisdom.
The story of the Fawcett mission undertaken in the name of exploration and scientific progress actually has a happy ending—one you don’t have to be a crackpot or a dupe to appreciate. Fawcett himself may not have had the benefit of modern imaging and surveying tools, but he was also probably too distracted by fantasies of White Lodges to see much of the evidence at his feet. David Grann made a final stop on his own Amazon journey to seek out the Kuikuro Indians and the archeologist who was staying with them, Michael Heckenberger. Grann writes,
Altogether, he had uncovered twenty pre-Columbian settlements in the Xingu, which had been occupied roughly between A.D. 800 and A.D. 1600. The settlements were about two to three miles apart and were connected by roads. More astounding, the plazas were laid out along cardinal points, from east to west, and the roads were positioned at the same geometric angles. (Fawcett said that Indians had told him legends that described “many streets set at right angles to one another.”) (313)
These were the types of settlements Fawcett had discovered real evidence for. They probably wouldn’t have been of much interest to spiritualists, but their importance to the fields of archeology and anthropology is immense. Grann records from his interview:
“Anthropologists,” Heckenberger said, “made the mistake of coming into the Amazon in the twentieth century and seeing only small tribes and saying, ‘Well, that’s all there is.’ The problem is that, by then, many Indian populations had already been wiped out by what was essentially a holocaust from European contact. That’s why the first Europeans in the Amazon described such massive settlements that, later, no one could ever find.” (317)
Carl Sagan describes a “soaring sense of wonder” as a key ingredient to both good science and bad. Pseudoscience triggers our wonder switches with heedless abandon. But every once in a while findings that are backed up with solid evidence are just as satisfying. “For a thousand years,” Heckenberger explains to Grann,
the Xinguanos had maintained artistic and cultural traditions from this highly advanced, highly structured civilization. He said, for instance, that the present-day Kuikuro village was still organized along the east and west cardinal points and its paths were aligned at right angles, though its residents no longer knew why this was the preferred pattern. Heckenberger added that he had taken a piece of pottery from the ruins and shown it to a local maker of ceramics. It was so similar to present-day pottery, with its painted exterior and reddish clay, that the potter insisted it had been made recently…. “To tell you the honest-to-God truth, I don’t think there is anywhere in the world where there isn’t written history where the continuity is so clear as right here,” Heckenberger said. (318)
[The PBS series "Secrets of the Dead" devoted a show to Fawcett.]
Also read
The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini
How to be Interesting: Dead Poets and Causal Inferences
Henry James famously wrote, “The only obligation to which in advance we may hold a novel without incurring the accusation of being arbitrary, is that it be interesting.” But how does one manage to be interesting? The answer probably lies in staying one step ahead of your readers, but every reader moves at their own pace.
No one writes a novel without ever having read one. Though storytelling comes naturally to us as humans, our appreciation of the lengthy, intricately rendered narratives we find spanning the hundreds of pages between book covers is contingent on a long history of crucial developments, literacy for instance. In the case of an individual reader, the faithfulness with which ontogeny recapitulates phylogeny will largely determine the level of interest taken in any given work of fiction. In other words, to appreciate a work, it is necessary to have some knowledge of the literary tradition to which it belongs. T.S. Eliot’s famous 1919 essay “Tradition and the Individual Talent” eulogizes great writers as breathing embodiments of the entire history of their art. “The poet must be very conscious of the main current,” Eliot writes,
which does not at all flow invariably through the most distinguished reputations. He must be quite aware of the obvious fact that art never improves, but that the material of art is never quite the same. He must be aware that the mind of Europe—the mind of his own country—a mind which he learns in time to be much more important than his own private mind—is a mind which changes, and that this change is a development which abandons nothing en route, which does not superannuate either Shakespeare, or Homer, or the rock of the Magdalenian draughtsmen.
Though Eliot probably didn’t mean to suggest that to write a good poem or novel you have to have thoroughly mastered every word of world literature, a condition that would’ve excluded most efforts even at the time he wrote the essay, he did believe that to fully understand a work you have to be able to place it in its proper historical context. “No poet,” he wrote,
no artist of any art, has his complete meaning alone. His significance, his appreciation is the appreciation of his relation to the dead poets and artists. You cannot value him alone; you must set him, for contrast and comparison, among the dead.
If this formulation of what goes into the appreciation of art is valid, then as time passes and historical precedents accumulate, the burden of knowledge that must be shouldered to sustain adequate interest in, or appreciation for, works in the tradition grows constantly heavier. Accordingly, the number of people who can manage it grows constantly smaller.
But what if there is something like a threshold awareness of literary tradition—or even of current literary convention—beyond which the past ceases to be the most important factor influencing your appreciation for a particular work? Once your reading comprehension is up to snuff and you’ve learned how to deal with some basic strategies of perspective—first person, third person omniscient, etc.—then you’re free to interpret stories not merely as representative of some tradition but of potentially real people and events, reflective of some theme that has real meaning in most people’s lives. Far from seeing the task of the poet or novelist as serving as a vessel for artistic tradition, Henry James suggests in his 1884 essay “The Art of Fiction” that
The only obligation to which in advance we may hold a novel without incurring the accusation of being arbitrary, is that it be interesting. That general responsibility rests upon it, but it is the only one I can think of. The ways in which it is at liberty to accomplish this result (of interesting us) strike me as innumerable and such as can only suffer from being marked out, or fenced in, by prescription. They are as various as the temperament of man, and they are successful in proportion as they reveal a particular mind, different from others. A novel is in its broadest definition a personal impression of life; that, to begin with, constitutes its value, which is greater or less according to the intensity of the impression.
Writing for dead poets the way Eliot suggests may lead to works that are historically interesting. But a novel whose primary purpose is to represent, say, Homer’s Odyssey in some abstract way, a novel which, in other words, takes a piece of literature as its subject matter rather than some aspect of life as it is lived by humans, will likely only ever be interesting to academics. This isn’t to say that writers of the past ought to be ignored; rather, their continuing relevance is likely attributable to their works’ success in being interesting. So when you read Homer you shouldn’t be wondering how you might artistically reconceptualize his epics—you should be attending to the techniques that make them interesting and wondering how you might apply them in your own work, which strives to artistically represent some aspect of life. You go to past art for technical or thematic inspiration, not for traditions with which to carry on some dynamic exchange.
Representation should, as a rule of thumb, take priority over tradition. And to insist, as Eliot does, as an obvious fact or otherwise, that artistic techniques never improve is to admit defeat before taking on the challenge. But this leaves us with the question of how, beyond a devotion to faithful representations of recognizably living details, one manages to be interesting. Things tend to interest us when they’re novel or surprising. That babies direct their attention to incidents which go against their expectations is what allows us to examine what those expectations are. Babies, like their older counterparts, stare longer at bizarre occurrences. If a story consisted of nothing but surprising incidents, however, we would probably lose interest in it pretty quickly because it would strike us as chaotic and incoherent. Citing research showing that surprise is necessary for securing the interest of readers but not sufficient, Sung-Il Kim, a psychologist at Korea University, explains that whatever incongruity causes the surprise must somehow be resolved. In other words, the surprise has to make sense in the shifted context.
In Aspects of the Novel, E.M. Forster makes his famous distinction between flat and round characters with reference to the latter’s ability to surprise readers. He notes however that surprise is only half the formula, since a character who only surprises would seem merely erratic—or would seem like something other than a real person. He writes,
The test of a round character is whether it is capable of surprising in a convincing way. If it never surprises, it is flat. If it does not convince, it is a flat pretending to be round. It has the incalculability of life about it—life within the pages of a book. And by using it sometimes alone, more often in combination with the other kind, the novelist achieves his task of acclimatization and harmonizes the human race with the other aspects of his work. (78)
Kim discovered that this same dynamic is at play even in the basic unit of a single described event, suggesting that the convincing surprise is important for all aspects of the story, not just character. He went on to test the theory that what lies at the heart of our interest in these seeming incongruities that are in time resolved is our tendency to anticipate the resolution. When a brief description involves some element that must be inferred, it is considered more interesting, and it proves more memorable, than when the same incident is described in full detail without any demand for inference. However, when researchers rudely distract readers in experiments, keeping them from being able to infer, the differences in recall and reported interest vanish.
Kim proposes a “causal bridging inference” theory to explain what makes a story interesting. If there aren’t enough inferences to be made, the story seems boring and banal. But if there are too many then the reader gets overwhelmed and spaces out. “Whether inferences are drawn or not,” Kim writes,
depends on two factors: the amount of background knowledge a reader possesses and the structure of a story… In a real life situation, for example, people are interested in new scientific theories, new fashion styles, or new leading-edge products only when they have an adequate amount of background knowledge on the domain to fill the gap between the old and the new… When a story contains such detailed information that there is no gap to fill in, a reader does not need to generate inferences. In this case, the story would not be interesting even if the reader possessed a great deal of background knowledge. (69)
One old-fashioned and intuitive way of thinking about causal bridging inference theory is to see the task of a writer as keeping one or two steps ahead of the reader. If the story runs ahead by more than a few steps, it risks becoming too difficult to follow and the reader gets lost. If it falls behind, it drags, like the boor who relishes the limelight and so stretches out his anecdotes with excruciatingly superfluous detail.
For a writer, the takeaway is that you want to shock and surprise your readers, which means making your story take unexpected, incongruous turns, but you should also seed the narrative with what in hindsight can be seen as hints to what’s to come so that the surprises never seem random or arbitrary—and so that the reader is trained to seek out further clues to make further inferences. This is what Forster meant when he said characters should change in ways that are both surprising and convincing. It’s perhaps a greater achievement to have character development, plot, and language integrated so that an inevitable surprise in one of these areas has implications for or bleeds into both the others. But we as readers can enjoy on its own an unlikely observation or surprising analogy that we discover upon reflection to be fitting. And of course we can enjoy a good plot twist in isolation too—witness Hollywood and genre fiction.
Naturally, some readers can be counted on to be better at making inferences than others. As Kim points out, this greater ability may be based on a broader knowledge base; if the author makes an allusion, for instance, it helps to know about the subject being alluded to. It can also be based on comprehension skills, awareness of genre conventions, understanding of the physical or psychological forces at play in the plot, and so on. The implication is that keeping those crucial two steps ahead, no more no less, means targeting readers who are just about as good at making inferences as you are and working hard through inspiration, planning, and revision to maintain your lead. If you’re erudite and agile of mind, you’re going to bore yourself trying to write for those significantly less so—and those significantly less so are going to find what is keenly stimulating for you to write all but impossible to comprehend.
Interestingness is also influenced by fundamental properties of stories like subject matter—Percy Fawcett explores the Amazon in search of the lost City of Z is more interesting than Margaret goes grocery shopping—and the personality traits of characters that influence the degree to which we sympathize with them. But technical virtuosity often supersedes things like topic and character. A great writer can write about a boring character in an interesting way. Interestingly, however, the benefit in interest won through mastery of technique will only be appreciated by those capable of inferring the meaning hinted at by the narration, those able to make the proper conceptual readjustments to accommodate surprising shifts in context and meaning. When mixed martial arts first became popular, for instance, audiences roared over knockouts and body slams, and yawned over everything else. But as Joe Rogan has remarked from ringside at events over the past few years fans have become so sophisticated that they cheer when one fighter passes the other’s guard.
What this means is that no matter how steadfast your devotion to representation, assuming your skills continually develop, there will be a point of diminishing returns, a point where improving as a writer will mean your work has greater interest but to a shrinking audience. My favorite illustration of this dilemma is Steven Millhauser’s parable “In the Reign of Harad IV,” in which “a maker of miniatures” carves and sculpts tiny representations of a king’s favorite possessions. Over time, though, the miniaturist ceases to care about any praise he receives from the king or anyone else at court and begins working to satisfy an inner “stirring of restlessness.” His creations become smaller and smaller, necessitating greater and greater magnification tools to appreciate. No matter how infinitesimal he manages to make his miniatures, upon completion of each work he seeks “a farther kingdom.” It’s one of the most interesting short stories I’ve read in a while.
[Diagram: Causal Bridging Inference Model]
What's the Point of Difficult Reading?
For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them.
You sit reading the first dozen or so pages of some celebrated classic and gradually realize that having to sort out how the ends of the long sentences fix to their beginnings is taking just enough effort to distract you entirely from the setting or character you’re supposed to be getting to know. After a handful of words you swear are made up and a few tangled metaphors you find yourself riddling over with nary a resolution, the dread sinks in. Is the whole book going to be like this? Is it going to be one of those deals where you get to what’s clearly meant to be a crucial turning point in the plot but for you is just another riddle without a solution, sending you paging back through the forest of verbiage in search of some key succession of paragraphs you spaced out while reading the first time through? Then you wonder if you’re missing some other kind of key, like maybe the story’s an allegory, a reference to some historical event like World War II or some Revolution you once had to learn about but have since lost all recollection of. Maybe the insoluble similes are allusions to some other work you haven’t read or can’t recall. In any case, you’re not getting anything out of this celebrated classic but frustration leading to the dual suspicion that you’re too ignorant or stupid to enjoy great literature and that the whole “great literature” thing is just a conspiracy to trick us into feeling dumb so we’ll defer to the pseudo-wisdom of Ivory Tower elites.
If enough people of sufficient status get together and agree to extol a work of fiction, they can get almost everyone else to agree. The readers who get nothing out of it but frustration and boredom assume that since their professors or some critic in a fancy-pants magazine or the judges of some literary award committee think it’s great they must simply be missing something. They dutifully continue reading it, parrot a few points from a review that sound clever, and afterward toe the line by agreeing that it is indeed a great work of literature, clearly, even if it doesn’t speak to them personally. For instance, James Joyce’s Ulysses, utterly nonsensical to anyone without at least a master’s degree, tops the Modern Library’s list of 100 best novels in the English language. Responding to the urging of his friends to write out an explanation of the novel, Joyce scoffed, boasting,
I’ve put in so many enigmas and puzzles that it will keep the professors busy for centuries arguing over what I meant, and that’s the only way of ensuring one’s immortality.
He was right. To this day, professors continue to love him even as Ulysses and the even greater monstrosity Finnegans Wake do nothing but bore and befuddle everyone else—or else, more fittingly, sit inert and unchecked-out on the shelf, gathering well-deserved dust.
Joyce’s later novels are not literature; they are lengthy collections of loosely connected literary puzzles. But at least his puzzles have actual solutions—or so I’m told. Ulysses represents the apotheosis of the tradition in literature called modernism. What came next, postmodernism, is even more disconnected from the universal human passion for narrative. Even professors aren’t sure what to do with it, so they simply throw their hands up, say it’s great, and explain that the source of its greatness is its very resistance to explanation. Jonathan Franzen, whose 2001 novel The Corrections represented a major departure from the postmodernism he began his career experimenting with, explained the following year in The New Yorker how he’d turned away from the tradition. He’d been reading the work of William Gaddis “as a kind of penance” (101) and not getting any meaning out of it. Of the final piece in the celebrated author’s oeuvre, Franzen writes,
The novel is an example of the particular corrosiveness of literary postmodernism. Gaddis began his career with a Modernist epic about the forgery of masterpieces. He ended it with a pomo romp that superficially resembles a masterpiece but punishes the reader who tries to stay with it and follow its logic. When the reader finally says, Hey, wait a minute, this is a mess, not a masterpiece, the book instantly morphs into a performance-art prop: its fraudulence is the whole point! And the reader is out twenty hours of good-faith effort. (111)
In other words, reading postmodern fiction means not only forgoing the rewards of narratives, having them replaced by the more taxing endeavor of solving multiple riddles in succession, but those riddles don’t even have answers. What’s the point of reading this crap? Exactly. Get it?
You can dig deeper into the meaningless meanderings of pomos and discover there is in fact an ideology inspiring all the infuriating inanity. The super smart people who write and read this stuff point to the willing, eager complicity of the common reader in the propagation of all the lies that sustain our atrociously unjust society (but atrociously unjust compared to what?). Franzen refers to this as the Fallacy of the Stupid Reader,
wherein difficulty is a “strategy” to protect art from cooptation and the purpose of art is to “upset” or “compel” or “challenge” or “subvert” or “scar” the unsuspecting reader; as if the writer’s audience somehow consisted, again and again, of Charlie Browns running to kick Lucy’s football; as if it were a virtue in a novelist to be the kind of boor who propagandizes at friendly social gatherings. (109)
But if the author is worried about art becoming a commodity does making the art shitty really amount to a solution? And if the goal is to make readers rethink something they take for granted why not bring the matter up directly, or have a character wrestle with it, or have a character argue with another character about it? The sad fact is that these authors probably just suck, that, as Franzen suspects, “literary difficulty can operate as a smoke screen for an author who has nothing interesting, wise, or entertaining to say” (111).
Not all difficulty in fiction is a smoke screen though. Not all the literary emperors are naked. Franzen writes that “there is no headache like the headache you get from working harder on deciphering a text than the author, by all appearances, has worked on assembling it.” But the essay, titled “Mr. Difficult,” begins with a reader complaint sent not to Gaddis but to Franzen himself. And the reader, a Mrs. M. from Maryland, really gives him the business:
Who is it that you are writing for? It surely could not be the average person who enjoys a good read… The elite of New York, the elite who are beautiful, thin, anorexic, neurotic, sophisticated, don’t smoke, have abortions tri-yearly, are antiseptic, live in penthouses, this superior species of humanity who read Harper’s and The New Yorker. (100)
In this first part of the essay, Franzen introduces a dilemma that sets up his explanation of why he turned away from postmodernism—he’s an adherent of the “Contract model” of literature, whereby the author agrees to share, on equal footing, an entertaining or in some other way gratifying experience, as opposed to the “Status model,” whereby the author demonstrates his or her genius and if you don’t get it, tough. But his coming to a supposed agreement with Mrs. M. about writers like Gaddis doesn’t really resolve Mrs. M.’s conflict with him.
The Corrections, after all, the novel she was responding to, represents his turning away from the tradition Gaddis wrote in. (It must be said, though, that Freedom, Franzen’s next novel, is written in a still more accessible style.)
The first thing we must do to respond properly to Mrs. M. is break down each of Franzen’s models into two categories. The Status model includes writers like Gaddis whose difficulty serves no purpose but to frustrate and alienate readers. But Franzen’s own type specimen for this model is Flaubert, much of whose writing, though difficult at first, rewards any effort to re-read and further comprehend with a more profound connection. So it is for countless other writers—the one behind number two on the Modern Library’s ranking, for instance: Fitzgerald, with Gatsby. As for the Contract model, Franzen admits,
Taken to its free-market extreme, Contract stipulates that if a product is disagreeable to you the fault must be the product’s. If you crack a tooth on a hard word in a novel, you sue the author. If your professor puts Dreiser on your reading list, you write a harsh student evaluation… You’re the consumer; you rule. (100)
Franzen, in declaring himself a “Contract kind of person,” assumes that the free-market extreme can be dismissed for its extremity. But Mrs. M. would probably challenge him on that. For many, particularly right-leaning readers, the market not only can but should be relied on to determine which books are good and which ones belong in some tiny niche. When the Modern Library conducted a readers' poll to create a popular ranking to balance the one made by experts, the ballot was stuffed by Ayn Rand acolytes and Scientologists. Mrs. M. herself leaves little doubt as to her political sympathies. For her and her fellow travelers, things like literature departments, National Book Awards—like the one The Corrections won—Nobels and Pulitzers are all an evil form of intervention into the sacred workings of the divine free market, un-American, sacrilegious, communist. According to this line of thinking, authors aren’t much different from whores—except of course literal whoring is condemned in the Bible (except when it isn’t).
A contract with readers who score high on the personality dimension of openness to new ideas and experiences (who tend to be liberal), those who have spent a lot of time in the past reading books like The Great Gatsby or Heart of Darkness or Lolita (the horror!), those who read enough to have developed finely honed comprehension skills—that contract is going to look quite a bit different from one with readers who attend Beck University, those for whom Atlas Shrugged is the height of literary excellence. At the same time, though, the cult of self-esteem is poisoning schools and homes with the idea that suggesting that a student or son or daughter is anything other than a budding genius is a form of abuse. Heaven forbid a young person feel judged or criticized while speaking or writing. And if an author makes you feel the least bit dumb or ignorant, well, it’s an outrage—heroes like Mrs. M. to the rescue.
One of the problems with the cult of self-esteem is that anticipating criticism tends to make people more, not less, creative. And the link between low self-esteem and mental disorders is almost purely mythical. High self-esteem is correlated with school performance, but as far as researchers can tell it’s the performance causing the esteem, not the other way around. More invidious, though, is the tendency to view anything that takes a great deal of education or intelligence to accomplish as an affront to everyone less educated or intelligent. Conservatives complain endlessly about class warfare and envy of the rich—the financially elite—but they have no qualms about decrying intellectual elites and condemning them for flaunting their superior literary achievements. They see the elitist mote in the eye of Nobel laureates without noticing the beam in their own.
What’s the point of difficult reading? Well, what’s the point of running five or ten miles? What’s the point of eating vegetables as opposed to ice cream or Doritos? Difficulty need not preclude enjoyment. And discipline in the present is often rewarded in the future. It very well may be that the complexity of the ideas you’re capable of understanding is influenced by how many complex ideas you attempt to understand. No matter how vehemently true believers in the magic of markets insist otherwise, markets don’t have minds. And though an individual’s intelligence need not be fixed, a good way to ensure children never get any smarter than they already are is to make them feel fantastically wonderful about their mediocrity. We just have to hope that despite these ideological traps there are enough people out there determined to wrap their minds around complex situations depicted in complex narratives about complex people told in complex language, people who will in the process develop the types of minds and intelligence necessary to lead the rest of our lazy asses into a future that’s livable and enjoyable. For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them. How do we get the rewards without wasting our time?
Can’t Win for Losing: Why There Are So Many Losers in Literature and Why It Has to Change
David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike.
Ironically, the author of The Golden Notebook, celebrating its 50th anniversary this year, and considered by many a “feminist bible,” happens to be an outspoken critic of feminism. When asked in a 1982 interview with Lesley Hazelton about her response to readers who felt some of her later works were betrayals of the women whose cause she once championed, Doris Lessing replied,
What the feminists want of me is something they haven't examined because it comes from religion. They want me to bear witness. What they would really like me to say is, ‘Ha, sisters, I stand with you side by side in your struggle toward the golden dawn where all those beastly men are no more.’ Do they really want people to make oversimplified statements about men and women? In fact, they do. I've come with great regret to this conclusion.
Lessing has also been accused of being overly harsh—“castrating”—to men, too many of whom she believes roll over a bit too easily when challenged by women aspiring to empowerment. As a famous novelist, however, who would go on to win the Nobel prize in literature in 2007, she got to visit a lot of schools, and it gradually dawned on her that it wasn’t so much that men were rolling over but rather that they were being trained from childhood to be ashamed of their maleness. In a lecture she gave to the Edinburgh book festival in 2001, she said,
Great things have been achieved through feminism. We now have pretty much equality at least on the pay and opportunities front, though almost nothing has been done on child care, the real liberation. We have many wonderful, clever, powerful women everywhere, but what is happening to men? Why did this have to be at the cost of men? I was in a class of nine- and 10-year-olds, girls and boys, and this young woman was telling these kids that the reason for wars was the innately violent nature of men. You could see the little girls, fat with complacency and conceit while the little boys sat there crumpled, apologising for their existence, thinking this was going to be the pattern of their lives.
Lessing describes how the teacher kept casting glances expectant of her approval as she excoriated these impressionable children.
Elaine Blair, in “Great American Losers,” an essay that’s equal parts trenchant and infuriatingly obtuse, describes a dynamic in contemporary fiction that’s similar to the one Lessing saw playing out in the classroom.
The man who feels himself unloved and unlovable—this is a character that we know well from the latest generation or two of American novels. His trials are often played for sympathetic laughs. His loserdom is total: it extends to his stunted career, his squalid living quarters, his deep unease in the world.
At the heart of this loserdom is his auto-manifesting knowledge that women don’t like him. As opposed to men of earlier generations who felt entitled to a woman’s respect and admiration, Blair sees this modern male character as being “the opposite of entitled: he approaches women cringingly, bracing for a slap.” This desperation on the part of male characters to avoid offending women, to prove themselves capable of sublimating their own masculinity so they can be worthy of them, finds its source in the authors themselves. Blair writes,
Our American male novelists, I suspect, are worried about being unloved as writers—specifically by the female reader. This is the larger humiliation looming behind the many smaller fictional humiliations of their heroes, and we can see it in the way the characters’ rituals of self-loathing are tacitly performed for the benefit of an imagined female audience.
Blair quotes a review David Foster Wallace wrote of a John Updike novel to illustrate how conscious males writing literature today are of their female readers’ hostility toward men who write about sex and women without apologizing for liking sex and women—sometimes even outside the bounds of caring, committed relationships. Labeling Updike as a “Great Male Narcissist,” a distinction he shares with writers like Philip Roth and Norman Mailer, Wallace writes,
Most of the literary readers I know personally are under forty, and a fair number are female, and none of them are big admirers of the postwar GMNs. But it’s John Updike in particular that a lot of them seem to hate. And not merely his books, for some reason—mention the poor man himself and you have to jump back:
“Just a penis with a thesaurus.”
“Has the son of a bitch ever had one unpublished thought?”
“Makes misogyny seem literary the same way Rush [Limbaugh] makes fascism seem funny.”
And trust me: these are actual quotations, and I’ve heard even worse ones, and they’re all usually accompanied by the sort of facial expressions where you can tell there’s not going to be any profit in appealing to the intentional fallacy or talking about the sheer aesthetic pleasure of Updike’s prose.
Since Wallace is ready to “jump back” at the mere mention of Updike’s name, it’s no wonder he’s given to writing about characters who approach women “cringingly, bracing for a slap.”
Blair goes on to quote from Jonathan Franzen’s novel The Corrections, painting a plausible picture of male writers who fear not only that their books will be condemned if too misogynistic—a relative term which has come to mean "not as radically feminist as me"—but that they themselves will be rejected. In Franzen’s novel, Chip Lambert has written a screenplay and asked his girlfriend Julia to give him her opinion. She holds off doing so, however, until after she breaks up with him and is on her way out the door. “For a woman reading it,” she says, “it’s sort of like the poultry department. Breast, breast, breast, thigh, leg” (26). Franzen describes his character’s response to the critique:
It seemed to Chip that Julia was leaving him because “The Academy Purple” had too many breast references and a draggy opening, and that if he could correct these few obvious problems, both on Julia’s copy of the script and, more important, on the copy he’d specially laser-printed on 24-pound ivory bond paper for [the film producer] Eden Procuro, there might be hope not only for his finances but also for his chances of ever again unfettering and fondling Julia’s own guileless, milk-white breasts. Which by this point in the day, as by late morning of almost every day in recent months, was one of the last activities on earth in which he could still reasonably expect to take solace for his failures. (28)
If you’re reading a literary work like The Corrections, chances are you’ve at some point sat in a literature class—or even a sociology or cultural studies class—and been instructed that the proper way to fulfill your function as a reader is to critically assess the work in terms of how women (or minorities) are portrayed. Both Chip and Julia have sat through such classes. And you’re encouraged to express disapproval, even outrage, if something like a traditional role is enacted—or, gasp, objectification occurs. Blair explains how this affects male novelists:
When you see the loser-figure in a novel, what you are seeing is a complicated bargain that goes something like this: yes, it is kind of immature and boorish to be thinking about sex all the time and ogling and objectifying women, but this is what we men sometimes do and we have to write about it. We fervently promise, however, to avoid the mistake of the late Updike novels: we will always, always, call our characters out when they’re being self-absorbed jerks and louts. We will make them comically pathetic, and punish them for their infractions a priori by making them undesirable to women, thus anticipating what we imagine will be your judgments, female reader. Then you and I, female reader, can share a laugh at the characters’ expense, and this will bring us closer together and forestall the dreaded possibility of your leaving me.
In other words, these male authors are the grownup versions of those poor school boys Lessing saw forced to apologize for their own existence. Indeed, you can feel this dynamic, this bargain, playing out when you’re reading these guys’ books. Blair’s description of the problem is spot on. Her theory of what caused it, however, is laughable.
Because of the GMNs, these two tendencies—heroic virility and sexist condescension—have lingered in our minds as somehow yoked together, and the succeeding generations of American male novelists have to some degree accepted the dyad as truth. Behind their skittishness is a fearful suspicion that if a man gets what he wants, sexually speaking, he is probably exploiting someone.
The dread of slipping down the slope from attraction to exploitation has nothing to do with John Updike. Rather, it is embedded in terms at the very core of feminist ideology. Misogyny, for instance, is frequently deemed an appropriate label for men who indulge in lustful gazing, even in private. And the term objectification implies that the female whose subjectivity isn’t being properly revered is the victim of oppression. The main problem with this idea—and there are several—is that the term objectification is synonymous with attraction. The deluge of details about the female body in fiction by male authors can just as easily be seen as a type of confession, an unburdening of guilt by the offering up of sins. The female readers respond by assigning the writers some form of penance: vowing never to write, or even to think, like that again without flagellating themselves.
The conflict between healthy male desire and disapproving feminist prudery doesn’t just play out in the tortured psyches of geeky American male novelists. A.S. Byatt, in her Booker Prize-winning novel Possession, satirizes the plight of scholars so steeped in literary theory that they’ve become “papery” and sterile. But the novel ends with a male scholar named Roland overcoming his theory-induced self-consciousness to initiate sex with another scholar named Maud. Byatt describes the encounter:
And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)
The literary critic Monica Flegel cites this passage as an example of how Byatt’s old-fashioned novel features “such negative qualities of the form as its misogyny and its omission of the lower class.” Flegel is particularly appalled by how “stereotypical gender roles are reaffirmed” in the sex scene. “Maud is reduced in the end,” Flegel alleges, “to being taken possession of by her lover…and assured that Roland will ‘take care of her.’” How, we may wonder, did a man assuring a woman he would take care of her become an act of misogyny?
Perhaps critics like Flegel occupy some radical fringe; Byatt’s book was after all a huge success with audiences and critics alike, and it did win Byatt the Booker. The novelist Martin Amis, however, isn’t one to describe his assaults as indirect. He routinely dares to feature men who actually do treat women poorly in his novels—without any authorial condemnation.
Martin Goff, the non-intervening director of the Booker Prize committee, tells the story of the 1989 controversy over whether or not Amis’s London Fields should be on the shortlist. Maggie Gee, a novelist, and Helen McNeil, a professor, simply couldn’t abide Amis’s treatment of his women characters. “It was an incredible row,” says Goff.
Maggie and Helen felt that Amis treated women appallingly in the book. That is not to say they thought books which treated women badly couldn't be good, they simply felt that the author should make it clear he didn't favour or bless that sort of treatment. Really, there was only two of them and they should have been outnumbered as the other three were in agreement, but such was the sheer force of their argument and passion that they won. David [Lodge] has told me he regrets it to this day, he feels he failed somehow by not saying, “It's two against three, Martin's on the list”.
In 2010, Amis explained his career-spanning failure to win a major literary award, despite enjoying robust book sales, thus:
There was a great fashion in the last century, and it's still with us, of the unenjoyable novel. And these are the novels which win prizes, because the committee thinks, “Well it's not at all enjoyable, and it isn't funny, therefore it must be very serious.”
Brits like Hilary Mantel and especially Ian McEwan are working to turn this dreadful trend around. But when McEwan dared to write a novel about a neurosurgeon who prevails in the end over an afflicted, less privileged tormenter, he was condemned by critic Jennifer Szalai in the pages of Harper’s Magazine for his “blithe, bourgeois sentiments.” If you’ve read Saturday, you know the sentiments are anything but blithe, and if you read Szalai’s review you’ll be taken aback by her articulate blindness.
Amis is probably right in suggesting that critics and award committees have a tendency to mistake misery for profundity. But his own case, along with several others like it, hints at something even more disturbing, a shift in the very idea of what role fictional narratives play in our lives.
The sad new reality is that, owing to the growing influence of ideologically extreme and idiotically self-righteous activist professors, literature is no longer read for pleasure and enrichment—it’s no longer even read as a challenging exercise in outgroup empathy. Instead, reading literature is supposed by many to be a ritual of male western penance. Prior to taking an interest in literary fiction, you must first be converted to the proper ideologies, made to feel sufficiently undeserving yet privileged, the beneficiary of a long history of theft and population displacement, the scion and gene-carrier of rapists and genocidaires—the horror, the horror. And you must be taught to systematically overlook and remain woefully oblivious of all the evidence that the Enlightenment was the best fucking thing that ever happened to the human species. Once you’re brainwashed into believing that so-called western culture is evil and that you’ve committed the original sin of having been born into it, you’re ready to perform your acts of contrition by reading horrendously boring fiction that forces you to acknowledge and reflect upon your own fallen state.
Fittingly, the apotheosis of this new literary tradition won the Booker in 1999, and its author, like Lessing, is a Nobel laureate. J.M. Coetzee’s Disgrace chronicles in exquisite free indirect discourse the degradation of David Lurie, a white professor in Cape Town, South Africa, beginning with his somewhat pathetic seduction of a black student, a crime for which he pays with the loss of his job, his pension, and his reputation, and moving on to the aftermath of his daughter’s rape at the hands of three black men who proceed to rob her, steal his car, douse him with spirits, and light him on fire. What’s unsettling about the novel—and it is a profoundly unsettling novel—is that its structure implies that everything that David and Lucy suffer flows from his original offense of lusting after a young black woman. This woman, Melanie, is twenty years old, and though she is clearly reluctant at first to have sex with her teacher, there’s never any force involved. At one point, she shows up at David’s house and asks to stay with him. It turns out she has a boyfriend who is refusing to let her leave him without a fight. It’s only after David unheroically tries to wash his hands of the affair to avoid further harassment from this boyfriend—while stooping so low as to insist that Melanie make up a test she missed in his class—that she files a complaint against him.
David immediately comes clean to university officials and admits to taking advantage of his position of authority. But he stalwartly refuses to apologize for his lust, or even for his seduction of the young woman. This refusal makes him complicit, the novel suggests, in all the atrocities of colonialism. As he’s awaiting a hearing to address Melanie’s complaint, David gets a message:
On campus it is Rape Awareness Week. Women Against Rape, WAR, announces a twenty-four-hour vigil in solidarity with “recent victims”. A pamphlet is slipped under his door: ‘WOMEN SPEAK OUT.’ Scrawled in pencil at the bottom is a message: ‘YOUR DAYS ARE OVER, CASANOVA.’ (43)
During the hearing, David confesses to doctoring the attendance ledgers and entering a false grade for Melanie. As the attendees become increasingly frustrated with what they take to be evasions, he goes on to confess to becoming “a servant of Eros” (52). But this confession only enrages the social sciences professor Farodia Rassool:
Yes, he says, he is guilty; but when we try to get specificity, all of a sudden it is not abuse of a young woman he is confessing to, just an impulse he could not resist, with no mention of the pain he has caused, no mention of the long history of exploitation of which this is part. (53)
There’s also no mention, of course, of the fact that David has already gone through more suffering than Melanie has, or that her boyfriend deserves a great deal of the blame, or that David is an individual, not a representative of his entire race who should be made to answer for the sins of his forefathers.
After resigning from his position in disgrace, David moves out to the country to live with his daughter on a small plot of land. The attack occurs only days after he’s arrived. David wants Lucy to pursue some sort of justice, but she refuses. He wants her to move away because she’s clearly not safe, but she refuses. She even goes so far as to accuse him of being in the wrong for believing he has any right to pronounce what happened an injustice—and for thinking it is his place to protect his daughter. And if there’s any doubt about the implication of David’s complicity, she clears it up. As he’s pleading with her to move away, they begin talking about the rapists’ motivation. Lucy says to her father,
When it comes to men and sex, David, nothing surprises me anymore. Maybe, for men, hating the woman makes sex more exciting. You are a man, you ought to know. When you have sex with someone strange—when you trap her, hold her down, get her under you, put all your weight on her—isn’t it a bit like killing? Pushing the knife in; exiting afterwards, leaving the body behind covered in blood—doesn’t it feel like murder, like getting away with murder? (158)
The novel is so engrossing and so disturbing that it’s difficult to tell what the author’s position is vis-à-vis his protagonist’s degradation or complicity. You can’t help sympathizing with him and feeling his treatment at the hands of Melanie, Farodia, and Lucy is an injustice. But are you supposed to question that feeling in light of the violence Melanie is threatened with and Lucy is subjected to? Are you supposed to reappraise altogether your thinking about the very concept of justice in light of the atrocities of history? Are we to see David Lurie as an individual or as a representative of western male colonialism, deserving of whatever he’s made to suffer and more?
Personally, I think David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike. It’s really no surprise that the most significant developments in the realm of narratives lately haven’t occurred in novels at all. Insofar as the cable series contributing to the new golden age of television can be said to adhere to a formula, it’s this: begin with a badass male lead who doesn’t apologize for his own existence and has no qualms about expressing his feelings toward women. As far as I know, these shows are just as popular with women viewers as they are with the guys.
When David first arrives at Lucy’s house, they take a walk and he tells her a story about a dog he remembers from a time when they lived in a neighborhood called Kenilworth.
It was a male. Whenever there was a bitch in the vicinity it would get excited and unmanageable, and with Pavlovian regularity the owners would beat it. This went on until the poor dog didn’t know what to do. At the smell of a bitch it would chase around the garden with its ears flat and its tail between its legs, whining, trying to hide…There was something so ignoble in the spectacle that I despaired. One can punish a dog, it seems to me, for an offence like chewing a slipper. A dog will accept the justice of that: a beating for a chewing. But desire is another story. No animal will accept the justice of being punished for following its instincts.
Lucy breaks in, “So males must be allowed to follow their instincts unchecked? Is that the moral?” David answers,
No, that is not the moral. What was ignoble about the Kenilworth spectacle was that the poor dog had begun to hate its own nature. It no longer needed to be beaten. It was ready to punish itself. At that point it would be better to shoot it.
“Or have it fixed,” Lucy offers. (90)
Also Read:
SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION
And:
FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM
The Storytelling Animal: a Light Read with Weighty Implications
The Storytelling Animal is not groundbreaking. But the style of the book contributes something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams, through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be. The effect is that we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe.
A review of Jonathan Gottschall's The Storytelling Animal: How Stories Make Us Human
Vivian Paley, like many other preschool and kindergarten teachers in the 1970s, was disturbed by how her young charges always separated themselves by gender at playtime. She was further disturbed by how closely the play of each gender group hewed to the old stereotypes about girls and boys. Unlike most other teachers, though, Paley tried to do something about it. Her 1984 book Boys and Girls: Superheroes in the Doll Corner demonstrates in microcosm how quixotic social reforms inspired by the assumption that all behaviors are shaped solely by upbringing and culture can be. Eventually, Paley realized that it wasn’t the children who needed to learn new ways of thinking and behaving, but herself. What happened in her classrooms in the late 70s, developmental psychologists have reliably determined, is the same thing that happens when you put kids together anywhere in the world. As Jonathan Gottschall explains,
Dozens of studies across five decades and a multitude of cultures have found essentially what Paley found in her Midwestern classroom: boys and girls spontaneously segregate themselves by sex; boys engage in more rough-and-tumble play; fantasy play is more frequent in girls, more sophisticated, and more focused on pretend parenting; boys are generally more aggressive and less nurturing than girls, with the differences being present and measurable by the seventeenth month of life. (39)
Paley’s study is one of several you probably wouldn’t expect to find discussed in a book about our human fascination with storytelling. But, as Gottschall makes clear in The Storytelling Animal: How Stories Make Us Human, there really aren’t many areas of human existence that aren’t relevant to a discussion of the role stories play in our lives. Those rowdy boys in Paley’s classes were playing recognizable characters from current action and sci-fi movies, and the fantasies of the girls were right out of Grimm’s fairy tales (it’s easy to see why people might assume these cultural staples were to blame for the sex differences). And the play itself was structured around one of the key ingredients—really the key ingredient—of any compelling story, trouble, whether in the form of invading pirates or people trying to poison babies.
The Storytelling Animal is the book to start with if you have yet to cut your teeth on any of the other recent efforts to bring the study of narrative into the realm of cognitive and evolutionary psychology. Gottschall covers many of the central themes of this burgeoning field without getting into the weedier territories of game theory or selection at multiple levels. While readers accustomed to more technical works may balk at wading through all the author’s anecdotes about his daughters, Gottschall’s keen sense of measure and the light touch of his prose keep the book from getting bogged down in frivolousness. This applies as well to the sections in which he succumbs to the temptation any writer faces when trying to explain one or another aspect of storytelling by making a few forays into penning abortive, experimental plots of his own.
None of the central theses of The Storytelling Animal is groundbreaking. But the style and layout of the book contribute something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion, the way most science books do. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams—which contra Freud are seldom centered on wish-fulfillment—through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be (or actually is, if you’ve read any of D. F. Wallace’s last novel about an IRS clerk). The effect is that instead of simply having a new idea to toss around we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe. And we appreciate just how integral story is to almost everything we do.
This gloss of Gottschall’s approach gives a sense of what is truly original about The Storytelling Animal—it doesn’t seal off narrative as discrete from other features of human existence but rather shows how stories permeate every aspect of our lives, from our dreams to our plans for the future, even our sense of our own identity. In a chapter titled “Life Stories,” Gottschall writes,
This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all of the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television, while they eat pork rinds dipped in Miracle Whip. (171)
If you find this observation a tad unsettling, imagine it situated on a page underneath a mug shot of John Wayne Gacy with a caption explaining how he thought of himself “more as a victim than as a perpetrator.” For the most part, though, stories follow an easily identifiable moral logic, which Gottschall demonstrates with a short plot of his own based on the hypothetical situations Jonathan Haidt designed to induce moral dumbfounding. This almost inviolable moral underpinning of narratives suggests to Gottschall that one of the functions of stories is to encourage a sense of shared values and concern for the wider community, a role similar to the one D.S. Wilson sees religion as having played, and continuing to play, in human evolution.
Though Gottschall stays away from the inside baseball stuff for the most part, he does come down firmly on one issue in opposition to at least one of the leading lights of the field. Gottschall imagines a future “exodus” from the real world into virtual story realms that are much closer to the holodecks of Star Trek than to current World of Warcraft interfaces. The assumption here is that people’s emotional involvement with stories results from audience members imagining themselves to be the protagonist. But interactive videogames are probably much closer to actual wish-fulfillment than the more passive approaches to attending to a story—hence the god-like powers and grandiose speechifying.
William Flesch challenges the identification theory in his own (much more technical) book Comeuppance. He points out that films that have experimented with a first-person approach to camera work failed to capture audiences (think of the complicated contraption that filmed Will Smith’s face as he was running from the zombies in I Am Legend). Flesch writes, “If I imagined I were a character, I could not see her face; thus seeing her face means I must have a perspective on her that prevents perfect (naïve) identification” (16). One of the ways we sympathize with other people, though, is to mirror them—to feel, at least to some degree, their pain. That makes the issue a complicated one. Flesch believes our emotional involvement comes not from identification but from a desire to see virtuous characters come through the troubles of the plot unharmed, vindicated, maybe even rewarded. Attending to a story therefore entails tracking characters' interactions to see if they are in fact virtuous, then hoping desperately to see their virtue rewarded.
Gottschall does his best to avoid dismissing the typical obsessive LARPer (live-action role player) as the “stereotypical Dungeons and Dragons player” who “is a pimply, introverted boy who isn’t cool and can’t play sports or attract girls” (190). And he does his best to end his book on an optimistic note. But the exodus he writes about may be an example of another phenomenon he discusses. First the optimism:
Humans evolved to crave story. This craving has, on the whole, been a good thing for us. Stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures. Stories have been a great boon to our species. (197)
But he then makes an analogy with food cravings, which likewise evolved to serve a beneficial function yet in the modern world are wreaking havoc with our health. Just as there is junk food, so there is such a thing as “junk story,” possibly leading to what Brian Boyd, another luminary in evolutionary criticism, calls a “mental diabetes epidemic” (198). In the context of America’s current education woes, and with how easy it is to conjure images of glazy-eyed zombie students, the idea that video games and shows like Jersey Shore are “the story equivalent of deep-fried Twinkies” (197) makes an unnerving amount of sense.
Here, as in the section on how our personal histories are more fictionalized rewritings than accurate recordings, Gottschall manages to achieve something the playful tone and off-handed experimentation don't prepare you for. The surprising accomplishment of this unassuming little book (200 pages) is that it never stops being a light read even as it takes on discoveries with extremely weighty implications. The temptation to eat deep-fried Twinkies is only going to get more powerful as story-delivery systems become more technologically advanced. Might we have already begun the zombie apocalypse without anyone noticing—and, if so, are there already heroes working to save us we won’t recognize until long after the struggle has ended and we’ve begun weaving its history into a workable narrative, a legend?
Also read:
WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?
And:
HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT
Madness and Bliss: Critical vs. Primitive Readings in A.S. Byatt's Possession: A Romance, Part 2
Critics responded to A.S. Byatt’s challenge to their theories in her book Possession by insisting that the work fails to achieve high-brow status and fails to do anything but bolster old-fashioned notions about stories and language—not to mention the roles of men and women. What they don’t realize is how silly their theories come across to the non-indoctrinated.
Read part one.
The critical responses to the challenges posed by Byatt in Possession fit neatly within the novel’s satire. Louise Yelin, for instance, unselfconsciously divides the audience for the novel into “middlebrow readers” and “the culturally literate” (38), placing herself in the latter category. She overlooks Byatt’s challenge to her methods of criticism and the ideologies underpinning them, for the most part, and suggests that several of the themes, like ventriloquism, actually support poststructuralist philosophy. Still, Yelin worries about the novel’s “homophobic implications” (39). (A lesbian, formerly straight character takes up with a man in the end, and Christabel LaMotte’s female lover commits suicide after the dissolution of their relationship, but no one actually expresses any fear or hatred of homosexuals.) Yelin then takes it upon herself to “suggest directions that our work might take” while avoiding the “critical wilderness” Byatt identifies. She proposes a critical approach to a novel that “exposes its dependencies on the bourgeois, patriarchal, and colonial economies that underwrite” it (40). And since all fiction fails to give voice to one or another oppressed minority, it is the critic’s responsibility to “expose the complicity of those effacements in the larger order that they simultaneously distort and reproduce” (41). This is not in fact a response to Byatt’s undermining of critical theories; it is instead an uncritical reassertion of their importance.
Yelin and several other critics respond to Possession as if Byatt had suggested that “culturally literate” readers should momentarily push to the back of their minds what they know about how literature is complicit in various forms of political oppression so they can get more enjoyment from their reading. This response is symptomatic of an astonishing inability to even imagine what the novel is really inviting literary scholars to imagine—that the theories implicating literature are flat-out wrong. Monica Flegel for instance writes that “What must be privileged and what must be sacrificed in order for Byatt’s Edenic reading (and living) state to be achieved may give some indication of Byatt’s own conventionalizing agenda, and the negative enchantment that her particular fairy tale offers” (414). Flegel goes on to show that she does in fact appreciate the satire on academic critics; she even sympathizes with the nostalgia for simpler times, before political readings became mandatory. But she ends her critical essay with another reassertion of the political, accusing Byatt of “replicating” through her old-fashioned novel “such negative qualities of the form as its misogyny and its omission of the lower class.” Flegel is particularly appalled by Maud’s treatment in the final scene, since, she claims, “stereotypical gender roles are reaffirmed” (428). “Maud is reduced in the end,” Flegel alleges, “to being taken possession of by her lover…and assured that Roland will ‘take care of her’” (429). This interpretation places Flegel in the company of the feminists in the novel who hiss at Maud for trying to please men, forcing her to bind her head.
Flegel believes that her analysis proves Byatt is guilty of misogyny and mistreatment of the poor. “Byatt urges us to leave behind critical readings and embrace reading for enjoyment,” she warns her fellow critics, “but the narrative she offers shows just how much is at stake when we leave criticism behind” (429). Flegel quotes Yelin to the effect that Possession is “seductive,” and goes on to declaim that
it is naïve, and unethical, to see the kind of reading that Byatt offers as happy. To return to an Edenic state of reading, we must first believe that such a state truly existed and that it was always open to all readers of every class, gender, and race. Obviously, such a belief cannot be held, not because we have lost the ability to believe, but because such a space never really did exist. (430)
In her preening self-righteous zealotry, Flegel represents a current in modern criticism that’s only slightly more extreme than that represented by Byatt’s misguided but generally harmless scholars. The step from using dubious theories to decode alleged justifications for political oppression in literature to Flegel’s frightening brand of absurd condemnatory moralizing leveled at authors and readers alike is a short one.
Another way critics have attempted to respond to Byatt’s challenge is by denying that she is in fact making any such challenge. Christien Franken suggests that Byatt’s problems with theories like poststructuralism stem from her dual identity as a critic and an author. In a lecture Byatt once gave titled “Identity and the Writer,” which was later published as an essay, Franken finds what she believes is evidence of poststructuralist thinking, even though Byatt denies taking the theory seriously. Franken believes that in the essay, “the affinity between post-structuralism and her own thought on authorship is affirmed and then again denied” (18). Her explanation is that
the critic in A.S. Byatt begins her lecture “Identity and the Writer” with a recognition of her own intellectual affinity with post-structuralist theories which criticize the paramount importance of “the author.” The writer in Byatt feels threatened by the same post-structuralist criticism. (17)
Franken claims that this ambivalence runs throughout all of Byatt’s fiction and criticism. But Ann Marie Adams disagrees, writing that “When Byatt does delve into poststructuralist theory in this essay, she does so only to articulate what ‘threatens’ and ‘beleaguers’ her as a writer, not to productively help her identify the true ‘identity’ of the writer” (349). In Adams’s view, Byatt belongs in the humanist tradition of criticism going back to Matthew Arnold and the romantics. In her own response to Byatt, Adams manages to come closer than any of her fellow critics to being able to imagine that the ascendant literary theories are simply wrong. But her obvious admiration for Byatt doesn’t prevent her from suggesting that “Yelin and Flegel are right to note that the conclusion of Possession, with its focus on closure and seeming transcendence of critical anxiety, affords a particularly ‘seductive’ and ideologically laden pleasure to academic readers” (120). And, while she seems to find some value in Arnoldian approaches, she fails to engage in any serious reassessment of the theories Byatt targets.
Frederick Holmes, in his attempt to explain Byatt’s attitude toward history as evidenced by the novel, agrees with the critics who see in Possession clear signs of the author’s embrace of postmodernism in spite of the parody and explicit disavowals. “It is important to acknowledge,” he writes,
that the liberation provided by Roland’s imagination from the previously discussed sterility of his intellectual sophistication is never satisfactorily accounted for in rational terms. It is not clear how he overcomes the post-structuralist positions on language, authorship, and identity. His claim that some signifiers are concretely attached to signifieds is simply asserted, not argued for. (330)
While Holmes is probably mistaken in taking this absence of rational justification as a tacit endorsement of the abandoned theory, the observation is the nearest any of the critics comes to a rebuttal of Byatt’s challenge. What Holmes is forgetting, though, is that structuralist and poststructuralist theorists themselves, from Saussure through Derrida, have been insisting on the inadequacy of language to describe the real world, a radical idea that flies in the face of every human’s lived experience, without ever providing any rational, empirical, or even coherent support for the departure.
The stark irony to which Holmes is completely oblivious is that he’s asking for a rational justification to abandon a theory that proclaims such justification impossible. The burden remains on poststructuralists to prove that people shouldn’t trust their own experiences with language. And it is precisely this disconnect with experience that Byatt shows to be problematic.
Holmes, like the other critics, simply can’t imagine that critical theories have absolutely no validity, so he’s forced to read into the novel the same chimerical ambivalence Franken tries so desperately to prove.
Roland’s dramatic alteration is validated by the very sort of emotional or existential experience that critical theory has conditioned him to dismiss as insubstantial. We might account for Roland’s shift by positing, not a rejection of his earlier thinking, but a recognition that his psychological well-being depends on his living as if such powerful emotional experiences had an unquestioned reality. (330)
Adams quotes from an interview in which Byatt discusses her inspiration for the characters in Possession, saying, “poor moderns are always asking themselves so many questions about whether their actions are real and whether what they say can be thought to be true […] that they become rather papery and are miserably aware of this” (111-2). Byatt believes this type of self-doubt is unnecessary. Indeed, Maud’s notion that “there isn’t a unitary ego” (290) and Roland’s thinking of the “idea of his ‘self’ as an illusion” (459)—not to mention Holmes’s conviction that emotional experiences are somehow unreal—are simple examples of the reductionist fallacy. While it is true that an individual’s consciousness and sense of self rest on a substrate of unconscious mental processes and mechanics that can be traced all the way down to the firing of neurons, to suggest this mechanical accounting somehow delegitimizes selfhood is akin to saying that water being made up of hydrogen and oxygen atoms means the feeling of wetness can only be an illusion.
Just as silly are the ideas that romantic love is a “suspect ideological construct” (290), as Maud calls it, and that “the expectations of Romance control almost everyone in the Western world” (460), as Roland suggests. Anthropologist Helen Fisher writes in her book Anatomy of Love, “some Westerners have come to believe that romantic love is an invention of the troubadours… I find this preposterous. Romantic love is far more widespread” (49). After a long list of examples of love-strickenness from all over the world from west to east to everywhere in-between, Fisher concludes that it “must be a universal human trait” (50). Scientists have found empirical support as well for Roland’s discovery that words can in fact refer to real things. Psychologist Nicole Speer and her colleagues used fMRI to scan people’s brains as they read stories. The actions and descriptions on the page activated the same parts of the brain as witnessing or perceiving their counterparts in reality. The researchers report, “Different brain regions track different aspects of a story, such as a character’s physical location or current goals. Some of these regions mirror those involved when people perform, imagine, or observe similar real-world activities” (989).
Critics like Flegel insist on joyless reading because happy endings necessarily overlook the injustices of the world. But this is like saying anyone who savors a meal is complicit in world hunger (or for that matter anyone who enjoys reading about a character savoring a meal). If feminist poststructuralists were right about how language functions as a vehicle for oppressive ideologies, then the most literate societies would be the most oppressive, instead of the other way around. Jacques Lacan is the theorist Byatt has the most fun with in Possession—and he is also the main target of the book Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science by the scientists Alan Sokal and Jean Bricmont. “According to his disciples,” they write, Lacan “revolutionized the theory and practice of psychoanalysis; according to his critics, he is a charlatan and his writings are pure verbiage” (18). After assessing Lacan’s use of concepts in topological mathematics, like the Möbius strip, which he sets up as analogies for various aspects of the human psyche, Sokal and Bricmont conclude that Lacan’s ideas are complete nonsense. They write,
The most striking aspect of Lacan and his disciples is probably their attitude toward science, and the extreme privilege they accord to “theory”… at the expense of observations and experiments… Before launching into vast theoretical generalizations, it might be prudent to check the empirical adequacy of at least some of its propositions. But, in Lacan’s writings, one finds mainly quotations and analyses of texts and concepts. (37)
Sokal and Bricmont wonder if the abuses of theorists like Lacan “arise from conscious fraud, self-deception, or perhaps a combination of the two” (6). The question resonates with the poem Randolph Henry Ash wrote about his experience exposing a supposed spiritualist as a fraud, in which he has a mentor assure her protégée, a fledgling spiritualist with qualms about engaging in deception, “All Mages have been tricksters” (444).
There’s even some evidence that Byatt is right about postmodern thinking making academics into “papery” people. In a 2006 lecture titled “The Inhumanity of the Humanities,” William van Peer reports on research he conducted with a former student comparing the emotional intelligence of students in the humanities to students in the natural sciences.
Although the psychologists Raymond Mar and Keith Oatley (407) have demonstrated that reading fiction increases empathy and emotional intelligence, van Peer found that humanities students had no advantage in emotional intelligence over students of natural science. In fact, there was a weak trend in the opposite direction—this despite the fact that the humanities students were reading more fiction. Van Peer attributes the deficit to common academic approaches to literature:
Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)
Whether it derives from her early reading of Arnold and his successors, as Adams suggests, or simply from her own artistic and readerly sensibilities, Byatt has an intense desire to revive that very humanity so many academics sacrifice on the altar of postmodern theory. Critical theory urges students to assume that any discussion of humanity, or universal traits, or human nature can only be exclusionary, oppressive, and, in Flegel’s words, “naïve” and “unethical.” The cognitive linguist Steven Pinker devotes his book The Blank Slate: The Modern Denial of Human Nature to debunking the radical cultural and linguistic determinism that is the foundation of modern literary theory. In a section on the arts, Pinker credits Byatt for playing a leading role in what he characterizes as a “revolt”:
Museum-goers have become bored with the umpteenth exhibit on the female body featuring dismembered torsos or hundreds of pounds of lard chewed up and spat out by the artist. Graduate students in the humanities are grumbling in emails and conference hallways about being locked out of the job market unless they write in gibberish while randomly dropping the names of authorities like Foucault and Butler. Maverick scholars are doffing the blinders that prevented them from looking at exciting developments in the sciences of human nature. And younger artists are wondering how the art world got itself into the bizarre place in which beauty is a dirty word. (Pinker 416)
Pinker’s characterization of modern art resonates with Roland Mitchell’s complaints about how psychoanalysis transforms his perceptions of landscapes. And the idea that beauty has become a dirty word is underscored by the critical condemnations of Byatt’s “fairy tale ending.” Pinker goes on to quote Byatt’s response in the New York Times Magazine to the question of what the best story ever told was.
Her answer—The Arabian Nights:
The stories in “The Thousand and One Nights”… are about storytelling without ever ceasing to be stories about love and life and death and money and food and other human necessities. Narration is as much a part of human nature as breath and the circulation of the blood. Modernist literature tried to do away with storytelling, which it thought vulgar, replacing it with flashbacks, epiphanies, streams of consciousness. But storytelling is intrinsic to biological time, which we cannot escape. Life, Pascal said, is like living in a prison from which every day fellow prisoners are taken away to be executed. We are all, like Scheherazade, under sentences of death, and we all think of our lives as narratives, with beginnings, middles, and ends. (quoted in Pinker 419)
Byatt’s satire of papery scholars and her portrayals of her characters’ transcendence of nonsensical theories are but the simplest and most direct ways she celebrates the power of language to transport readers—and the power of stories to possess them. Though she incorporates an array of diverse genres, from letters to poems to diaries, and though some of the excerpts’ meanings subtly change in light of discoveries about their authors’ histories, all these disparate parts nonetheless “hook together,” collaborating in the telling of this magnificent tale. This cooperation would be impossible if the postmodern truism about the medium being the message were actually true. Meanwhile, the novel’s intimate engagement with the mythologies of wide-ranging cultures thoroughly undermines the paradigm according to which myths are deterministic “repeating patterns” imposed on individuals, showing instead that these stories simultaneously emerge from and lend meaning to our common human experiences. As the critical responses to Possession make abundantly clear, current literary theories are completely inadequate in any attempt to arrive at an understanding of Byatt’s work. While new theories may be better suited to the task, it is incumbent on us to put forth a good faith effort to imagine the possibility that true appreciation of this and other works of literature will come only after we’ve done away with theory altogether.
Madness and Bliss: Critical versus Primitive Readings in A.S. Byatt’s Possession: a Romance
Possession can be read as the novelist’s narrative challenge to the ascendant critical theories, an “undermining of facile illusions” about language and culture and politics—a literary refutation of current approaches to literary criticism.
Part 1 of 2
“You have one of the gifts of the novelist at least,” Christabel LaMotte says to her cousin Sabine de Kercoz in A.S. Byatt’s Possession: a Romance, “you persist in undermining facile illusions” (377). LaMotte is staying with her uncle and cousin, Sabine later learns, because she is carrying the child of the renowned, and married, poet Randolph Henry Ash. The affair began when the two met at a breakfast party where they struck up an impassioned conversation that later prompted Ash to instigate a correspondence. LaMotte too was a poet, so each turned out to be an ideal reader for the other’s work. Just over a hundred years after this initial meeting, in the present day of Byatt’s narrative, the literary scholar Roland Mitchell finds two drafts of Ash’s first letter to LaMotte tucked away in the pages of a book he’s examining for evidence about the great poet’s life, and the detective work begins.
Roland, an unpaid research assistant financially dependent on the girlfriend he's in a mutually unfulfilling relationship with, is overtaken with curiosity and embarks on a quest to piece together the story of what happened between LaMotte and Ash. Knowing next to nothing about LaMotte, Mitchell partners with the feminist scholar Maud Bailey, whom one character describes as "a chilly mortal" (159), and a stilted romance develops between them as they seek out the clues to the earlier, doomed relationship. Through her juxtaposition of the romance between the intensely passionate, intensely curious nineteenth-century couple and the subdued, hyper-analytic, and sterile modern one, the novelist Byatt does some undermining of facile illusions of her own.
Both of the modern characters are steeped in literary theory, but Byatt's narrative suggests that their education and training are more a hindrance than an aid to true engagement with literature, and with life. It is only by breaking with professional protocol—by stealing the drafts of the letter from Ash to LaMotte—and breaking away from his mentor and fellow researchers that Roland has a chance to read, and experience, the story that transforms him. "He had been taught that language was essentially inadequate, that it could never speak what was there, that it only spoke itself" (513). But over the course of the story Roland comes to believe that this central tenet of poststructuralism is itself inadequate, along with the main tenets of other leading critical theories, including psychoanalysis. Byatt, in a later book of criticism, counts herself among the writers of fiction who "feel that powerful figures in the modern critical movements feel almost a gladiatorial antagonism to the author and the authority the author claims" (6).
Indeed, Possession can be read as the novelist’s narrative challenge to the ascendant critical theories, an “undermining of facile illusions” about language and culture and politics—a literary refutation of current approaches to literary criticism. In the two decades since the novel’s publication, critics working in these traditions have been unable to adequately respond to Byatt’s challenge because they’ve been unable to imagine that their ideas are not simply impediments to pleasurable reading but that they’re both wrong and harmful to the creation and appreciation of literature.
The possession of the title refers initially to how the story of LaMotte and Ash’s romance takes over Maud and Roland—in defiance of the supposed inadequacy of language. If words only speak themselves, then true communication would be impossible. But, as Roland says to Maud after they’ve discovered some uncanny correspondences between each of the two great poets’ works and the physical setting the modern scholars deduce they must’ve visited together, “People’s minds do hook together” (257). This hooking-together is precisely what inspires them to embark on their mission of discovery in the first place. “I want to—to—follow the path,” Maud says to Roland after they’ve read the poets’ correspondence together.
I feel taken over by this. I want to know what happened, and I want it to be me that finds out. I thought you were mad when you came to Lincoln with your piece of stolen letter.
Now I feel the same. It isn’t professional greed. It’s something more primitive. (239)
Roland interrupts to propose the label “Narrative curiosity” for her feeling of being taken over, to which she responds, “Partly” (239). Later in the story, after several more crucial discoveries, Maud proposes revealing all they’ve learned to their academic colleagues and returning to their homes and their lives. Roland worries doing so would mean going back “Unenchanted.” “Are we enchanted?” Maud replies. “I suppose we must start thinking again, sometime” (454). But it’s the primitive, enchanted, supposedly unthinking reading of the biographical clues about the poets that has brought the two scholars to where they are, and their journey ends up resulting in a transformation that allows Maud and Roland to experience the happy ending LaMotte and Ash were tragically deprived of.
Before discovering and being possessed by the romance of the nineteenth century poets, both Maud and Roland were living isolated and sterile lives. Maud, for instance, always has her hair covered in a kind of “head-binding” and twisted in tightly regimented braids that cause Roland “a kind of sympathetic pain on his own skull-skin” (282). She later reveals that she has to cover it because her fellow feminists always assume she’s “dyeing it to please men.” “It’s exhausting,” Roland has just said. “When everything’s a deliberate political stance. Even if it’s interesting” (295). Maud’s bound head thus serves as a symbol (if read in precisely the type of way Byatt’s story implicitly admonishes her audience to avoid) of the burdensome and even oppressive nature of an ideology that supposedly works for the liberation and wider consciousness of women.
Meanwhile, Roland is troubling himself about the implications of his budding romantic feelings for Maud. He has what he calls a “superstitious dread” of “repeating patterns,” a phrase he repeats over and over again throughout the novel. Thinking of his relations with Maud, he muses,
“Falling in love,” characteristically, combs the appearances of the world, and of the particular lover’s history, out of a random tangle and into a coherent plot. Roland was troubled that the opposite might be true. Finding themselves in a plot, they might suppose it appropriate to behave as though it was a sort of plot. And that would be to compromise some kind of integrity they had set out with. (456)
He later wrestles with the idea that "a Romance was one of the systems that controlled him, as the expectations of Romance control almost everyone in the Western world" (460). Because of his education, he cannot help doubting his own feelings, suspecting that giving in to their promptings would have political implications, and worrying that doing so would result in a compromising of his integrity (which he must likewise doubt) and his free will. Roland's self-conscious lucubration forms a stark contrast to what Randolph Henry Ash wrote in an early letter to his wife Ellen: "I cannot get out of my mind—as indeed, how should I wish to, whose most ardent desire is to be possessed entirely by the pure thought of you—I cannot get out of my mind the entire picture of you" (500). It is only by reading letters like this, and by becoming more like Ash, turning away in the process from his modern learning, that Roland can come to an understanding of himself and accept his feelings for Maud as genuine and innocent.
Identity for modern literary scholars, Byatt suggests, is a fraught and complicated issue. At different points in the novel, both Maud and Roland engage in baroque, abortive efforts to arrive at a sense of who they are. Maud, reflecting on how another scholar’s writing about Ash says more about the author than about the subject, meditates,
Narcissism, the unstable self, the fractured ego, Maud thought, who am I? A matrix for a susurration of texts and codes? It was both a pleasant and an unpleasant idea, this requirement that she think of herself as intermittent and partial. There was the question of the awkward body. The skin, the breath, the eyes, the hair, their history, which did seem to exist. (273)
Roland later echoes this head-binding poststructuralist notion of the self as he continues to dither over whether or not he should act on his feelings for Maud.
Roland had learned to see himself, theoretically, as a crossing-place for a number of systems, all loosely connected. He had been trained to see his idea of his “self” as an illusion, to be replaced by a discontinuous machinery and electrical message-network of various desires, ideological beliefs and responses, language forms and hormones and pheromones. Mostly he liked this. He had no desire for any strenuous Romantic self-assertion. (459)
But he takes that lack of desire for self-assertion to be genuine, when in fact it is born of his theory-induced self-doubt. He will have to discover in himself that very desire to assert or express himself if he wants to escape his lifeless, menial occupation and end his sexless isolation. He and Maud both have to learn how to integrate their bodies and their desires into their conceptions of themselves.
Unfortunately, thinking about sex is even more fraught with exhausting political implications for Byatt's scholars than thinking about the self. While on a trek to retrace the steps they believe LaMotte and Ash took in the hills of Yorkshire, Roland considers the writing of a psychoanalytic theorist. Disturbed, he asks Maud, "Do you never have the sense that our metaphors eat up our world?" (275). He goes on to explain that no matter what they tried to discuss,
It all reduced like boiling jam to—human sexuality… And then, really, what is it, what is this arcane power we have, when we see everything is human sexuality? It’s really powerlessness… We are so knowing… Everything relates to us and so we’re imprisoned in ourselves—we can’t see things. (276)
Maud and Roland are coming to realize that they can in fact see things, the same things that the couple whose story they're tracking down saw over a century ago. This budding realization inspires in Roland an awareness of how limiting, even incapacitating, the dubious ideas of critical theorizing can be. Through the distorting prism of psychoanalysis, "Sexuality was like thick smoked glass; everything took on the same blurred tint through it. He could not imagine a pool with stones and water" (278).
The irony is that, for all the faux sophistication of psychoanalytic sexual terminology, it engenders in both Roland and Maud nothing but bafflement and an aversion to actual sex. Roland highlights this paradox later, thinking,
They were children of a time and culture that mistrusted love, “in love,” romantic love, romance in toto, and which nevertheless in revenge proliferated sexual language, linguistic sexuality, analysis, dissection, deconstruction, exposure. (458)
Maud sums up the central problem when she says to Roland, “And desire, that we look into so carefully—I think all the looking-into has some very odd effects on the desire” (290). In that same scene, while still in Yorkshire trying to find evidence of LaMotte’s having accompanied Ash on his trip, the two modern scholars discover they share a fantasy, not a sexual fantasy, but one involving “An empty clean bed,” “An empty bed in an empty room,” and they wonder if “they’re symptomatic of whole flocks of exhausted scholars and theorists” (290-1).
Guided by their intense desire to be possessed by the two poets of the previous century, Maud and Roland try to imagine how they would have seen the world, and in so doing they try to imagine what it would be like not to believe in the poststructuralist and psychoanalytic theories they've been inculcated with. At first Maud tells Roland, "We live in the truth of what Freud discovered. Whether or not we like it. However we've modified it. We aren't really free to suppose—to imagine—he could possibly have been wrong about human nature" (276). But then they discover a cave with a pool whose reflected light looks like white fire, a metaphor that both LaMotte and Ash used in poems written around the time they would've come to that very place, and Maud proclaims, "She saw this. I'm sure she saw this" (289). Only then do the two begin trying in earnest to imagine what it would be like to live without their theories. Maud explains to Roland,
We know all sorts of things, too—about how there isn’t a unitary ego—how we’re made up of conflicting, interacting systems of things—and I suppose we believe that? We know we’re driven by desire, but we can’t see it as they did, can we? We never say the word Love, do we—we know it’s a suspect ideological construct—especially Romantic Love—so we have to make a real effort of imagination to know what it felt like to be them, here, believing in these things—Love—themselves—that what they did mattered—(290)
Though many critics have pointed out how the affair between LaMotte and Ash parallels the one between Maud and Roland, in some ways the trajectories of the two relationships run in opposite directions. For instance, LaMotte leaves Ash even more of a "chilly mortal" (310) than she was when she first met him. It turns out the term derives from a Mrs. Cammish, who lodged LaMotte and Ash while they were on their trip, and was handed down to Lady Bailey, Maud's relative, who applies it to her in a conversation with Roland. And whereas the ultimate falling-out between LaMotte and Ash comes in the wake of Ash exposing as a fraud a spiritualist in whose ideas and abilities LaMotte had invested a great deal of faith, Roland's counterpart disillusionment, his epiphany that literary theory as he has learned it is a fraud, is what finally makes the consummation of his relationship with Maud possible. Maud too has to overcome, to a degree, her feminist compunctions to be with Roland. Noting how this chilly mortal is warming over the course of their quest, Roland thinks how "It was odd to hear Maud Bailey talking wildly of madness and bliss" (360). But at last she lets her hair down.
Sabine's journal of the time her cousin Christabel stayed with her and her father on the Brittany coast, where she'd sought refuge after discovering she was pregnant, offers Roland and Maud a glimpse of how wrongheaded it can be to give precedence to their brand of critical reading over what they would consider a more primitive approach. Ironically, it is the young aspiring writer who gives them this glimpse as she chastises her high-minded poet cousin for her attempts to analyze and explain the meanings of the myths and stories she's grown up with. "The stories come before the meanings," Sabine insists to Christabel. "I do not believe all these explanations. They diminish. The idea of Woman is less than brilliant Vivien, and the idea of Merlin will not allegorise into male wisdom. He is Merlin" (384). These words come from the same young woman whom LaMotte earlier credited with persisting "in undermining facile illusions" (377).
Readers of Byatt’s novel, though not Maud and Roland, both of whom likely already know of the episode, learn about how Ash attended a séance and, reaching up to grab a supposedly levitating wreath, revealed it to be attached to a set of strings connected to the spiritualist. In a letter to Ruskin read for Byatt’s readers by another modern scholar, Ash expresses his outrage that someone would exploit the credulity and longing of the bereaved, especially mothers who’ve lost children. “If this is fraud, playing on a mother’s harrowed feelings, it is wickedness indeed” (423). He also wonders what the ultimate benefit would be if spiritualist studies into other realms proved to be valid. “But if it were so, if the departed spirits were called back—what good does it do? Were we meant to spend our days sitting and peering into the edge of the shadows?” (422). LaMotte and Ash part ways for good after his exposure of the spiritualist as a charlatan because she is so disturbed by the revelation. And, for the reader, the interlude serves as a reminder of past follies that today are widely acknowledged to have depended on trickery and impassioned credulity. So it might be for the ideas of Freud and Derrida and Lacan.
Roland arrives at the conclusion that this is indeed the case. Having been taught that language is inadequate and only speaks itself, he gradually comes to realize that this idea is nonsense. Reflecting on how he was taught that language couldn’t speak about what really existed in the world, he suddenly realizes that he’s been disabused of the idea. “What happened to him was that the ways in which it could be said had become more interesting than the idea that it could not” (513). He has learned through his quest to discover what had occurred between LaMotte and Ash that “It is possible for a writer to make, or remake at least, for a reader, the primary pleasures of eating, or drinking, or looking on, or sex.” People’s minds do in fact “hook together,” as he’d observed earlier, and they do it through language. The novel’s narrator intrudes to explain here near the end of the book what Roland is coming to understand.
Now and then there are readings that make the hairs on the neck, the non-existent pelt, stand on end and tremble, when every word burns and shines hard and clear and infinite and exact, like stones of fire, like points of stars in the dark—readings when the knowledge that we shall know the writing differently or better or satisfactorily, runs ahead of any capacity to say what we know, or how. In these readings, a sense that the text has appeared to be wholly new, never before seen, is followed, almost immediately, by the sense that it was always there, that we, the readers, knew it was always there, and have always known it was as it was, though we have now for the first time recognised, become fully cognisant of, our knowledge. (512) (Neuroscientists agree.)
The recognition the narrator refers to—which Roland is presumably experiencing in the scene—is of a shared human nature, and shared human experience, the notions of which are considered by most literary critics to be politically reactionary.
Though he earlier claimed to have no desire to assert himself, Roland discovers he has a desire to write poetry. He decides to turn away from literary scholarship altogether and become a poet. He also asserts himself by finally taking charge and initiating sex with Maud.
And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)
This is in fact, except for a postscript focusing on Ash, the final scene of the novel, and it represents Roland's total, and Maud's partial, transcendence of the theories and habits that hitherto made their lives so barren and lonely.
Read part 2
Related posts:
Read POSTSTRUCTURALISM: BANAL WHEN IT'S NOT BUSY BEING ABSURD
Read CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE
Or SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION
Useless Art and Beautiful Minds
The paradox of art is that the artist conveys more of him or herself by focusing on the subject of the work. The artists who cast the most powerful spells are the ones who can get the most caught up in things other than themselves. Falling in love exposes as much of the lover as it does of the one who’s loved.
When you experience a work of art you can’t help imagining the mind behind it. Many people go so far as to imagine the events of the natural world as attempts by incorporeal minds to communicate their intentions. Creating art demands effort and clear intentionality, and so we automatically try to understand the creator’s message. Ghanaian artist El Anatsui says the difference between the tools and consumer artifacts that go into his creations and the creations themselves is that you are meant to use bottle caps and can lids, but you are meant to contemplate works of art.
Ai Weiwei’s marble sculpture of a surveillance camera, for instance, takes its form from an object that has a clear function and transforms it into an object that stands inert, useless but for the irresistible compulsion it evokes to ponder what it means to live under the watchful gaze of an oppressive state. Mastering the challenge posed by his tormenters, taking their tools and turning them into objects of beauty and contemplation, is an obvious intention and thus an obvious message. We look at the sculpture and we feel we understand what Ai Weiwei meant in creating it.
Not all art is conducive to such ease of recognition, and sometimes unsettledness of meaning is its own meaning. We are given to classifying objects or images, primarily by their use. Asking the question, what is this, is usually the same as asking, what is it for? If we see an image surrounded by a frame hanging on a wall, even the least artistically inclined of us will assume the picture in some way pleases the man or woman who put it there. It could be a picture of a loved one. It could be an image whose symmetry and colors and complexity strike most people as beautiful. It could signify some aspect of group identity.
Not all art pleases, and sometimes the artist’s intention is to disturb. John Keats believed what he called negative capability, a state in which someone “is capable of being in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason,” to be central to the creation and appreciation of art. If everything we encounter fits neatly into our long-established categories, we will never experience such uncertainty. The temptation is always to avoid challenges to what we know because being so challenged can be profoundly discomfiting. But if our minds are never challenged they atrophy.
While artists often challenge us to contemplate topics like the slave trade for the manufacture of beer, the relation between industrial manufacturing and the natural world, or surveillance and freedom of expression, the mystery that lies at the foundation of art is the mind of the artist. Once we realize we’re to have an experience of art, we stop wondering, what is this for, and begin pondering what it means.
Art isn't, however, simply an expression of the artist's thoughts or emotions, and neither should it be considered merely an attempt at rendering some aspect of the real world through one of the representational media. How Anatsui conveys his message about the slave trade is just as important as any attempt to decipher what that message might be. The paradox of art is that the artist conveys more of him or herself by focusing on the subject of the work. The artists who cast the most powerful spells are the ones who can get the most caught up in things other than themselves. Falling in love exposes as much of the lover as it does of the one who's loved.
Music and narrative arts rely on the dimension of time, so they illustrate the point more effectively. The pace of the rhythms and the pitch of voices and instruments convey emotion with immediacy and force. Musicians must to some degree experience the emotions they hope to spread through their performances (though they may begin in tranquility and only succumb afterward, affected by their own performance). They are like actors. But audiences do not assume that the man who plays low-pitched, violent music is angry when the song is over. Nor do they assume the singer who croons a plangent love song is at that moment in her life in the throes of an infatuation. The musicians throw themselves into their performances, and perhaps into the writing and composing of the songs, and, to the extent that we forget we’re listening to a performer as we feel or relive the anger or the pangs of love their music immerses us in, they achieve a transcendence we recognize as an experience of true art. We in the audience attribute that transcendence to the musicians, and infer that even though they may not currently be in the state their song inspired they must know a great deal about it.
Likewise a fiction writer accomplishes the most by betraying little or no interest in him or herself. Line by line, scene by scene, if the reader is thinking of the author and not the characters the work is a failure. When the story ceases to be a story and the fates of characters become matters of real concern for the reader, the author has achieved that same artistic transcendence as the musician whose songs take hold of our hearts and make us want to rage, to cry, to dance. But, finishing the chapter, leaving the company of the characters, reemerging from the story, we can marvel at the seeming magic that so consumed us. Contemplating the ultimate outcomes as the unfolding of the plot comes to an end, we’re free to treat the work holistically and see in it the vision of the writer.
The presence of the artist’s mind need not distract from the subject we are being asked to contemplate. But all art can be thought of as an exercise in empathy. More than that, though, the making strange of familiar objects and experiences, the communion of minds assumed to be separated by some hitherto unbridgeable divide, these experiences inspire an approach to living and socializing that is an art in its own right. Sometimes to see something clearly we have to be able to imagine it otherwise. To really hear someone else we have to appreciate ourselves. To break free of our own habits, it helps to know how others think and live.
Also read:
WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?
The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind
Jonathan Haidt extends an olive branch to conservatives by acknowledging their morality has more dimensions than the morality of liberals. But is he mistaking what’s intuitive for what’s right? A critical, yet admiring review of The Righteous Mind.
A Review of Jonathan Haidt's new book,
The Righteous Mind: Why Good People are Divided by Politics and Religion
Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robber’s Cave in southern Oklahoma where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and they each came up with a name for themselves, the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-o-war. The goal was to find out if animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.
So do conservatives.
This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, "Anyone who values truth should stop worshipping reason" (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in "moral matrices" that incorporate all six, while liberal morality rests disproportionately on just three. At times, Haidt intimates that having more dimensions is better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.
One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.
Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out-of-touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?
The Elephant in the Room
Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question:
when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. (84)
Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.
They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)
Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.
Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you'll never recognize them yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image for an earlier book on happiness, so the use of the GOP mascot was accidental. But because of the more intuitive nature of conservative beliefs, it's appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,
the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)
The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper "The Weirdest People in the World?" that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can't argue that what's popular is necessarily better. But he manages to convey that attitude implicitly, even though he can't really share the attitude himself.
Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression are involved—and insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:
Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)
This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,
we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)
As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.
We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)
What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.
A Taste for Self-Righteousness
The divide over morality can largely be reduced to the differences between the urban educated and the poor not-so-educated. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:
But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say,
You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)
The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)
Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:
There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)
But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.
In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India. He went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” It was an earlier account of this sojourn Haidt had written for the online salon The Edge that first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”
On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)
The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to feel and experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what's known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order, not to the point of advocating hierarchy or rigid sex roles but seeing value in the complex network of interdependence.
The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That's where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the different dimensions of taste that make up our flavor palate. The two that everyone shares but that liberals give priority to whenever any two or more suggest different responses are Care/Harm—hurting people is wrong and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity: loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe's symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth one, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,
many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)
Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.
Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)
But if he really were restricting himself to description he would have no beef with the utilitarian ethicists like Mill, the deontological ones like Kant, or for that matter with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, father of utilitarianism, being autistic (the trendy psychological diagnosis du jour) (120). But, like a lawyer who throws out a damning but inadmissible comment only to say “withdrawn” when the defense objects, he assures us that he doesn’t mean the autism thing as an ad hominem.
I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.
In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; they simply believe they should question whether acting on them is appropriate in the given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism that Haidt extolls—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.
Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.
Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.
Resistance to the Hive Switch is Futile
“We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that
anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)
The most interesting idea in this section is that humans possess what Haidt calls a "hive switch" that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an "altered state of consciousness" when he was marching in formation with fellow soldiers in his army days. He describes it as a "sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life" (221). Sociologist Emile Durkheim referred to this same experience as "collective effervescence." People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It's a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.
Surprisingly, the altruism inspired by this sense of the sacred triggered by coordinated activity, though primarily directed at fellow group members—parochial altruism—can also flow out in ways that aren’t entirely tribal.
Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,
These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)
The Sacred foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that "just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes." Sosis went on to identify "one master variable" which accounted for the difference between success and failure for religious groups: "the number of costly sacrifices that each commune demanded from its members" (257). But sacrifices demanded by secular groups made no difference whatsoever. Haidt concludes,
In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)
This section captures the best and the worst of Haidt's work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that's a brilliant contribution to theories going back through D. S. Wilson and Emile Durkheim all the way to Darwin. Contemplating it sparks a sense of wonder that must emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.
The costs critics of religion point to aren't the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, "arbitrary" beliefs and practices of religious communities to observing the movements of a football for the purpose of trying to understand why people love watching games. It's the coming together as a group, he suggests, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren't what's important, then there's no reason they have to be arbitrary—and there's no reason they should have to entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for "worshipping reason" and celebrating the collective endeavor known as science? Why doesn't he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by wrongly assuming that anyone who doesn't object to flushing an American flag down the toilet has no sense of the sacred, shaking his finger at them and effectively saying: rallying around a cause is what being human is all about, but what you flag-flushers think is important just isn't worthy, even though it's exactly what I think is important too, what I've devoted my career, and this book you're holding, to anyway.
As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norms before it came about—and still are the norms in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those who are the most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. And surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, and even sacred, ideal.
But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the burgeoning focus-groupocracy. Interestingly, the third stage of the Robbers Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. And all it took was some shared goals they had to cooperate to achieve, like when their bus got stuck on the side of the road and all the boys in both groups had to work together to pull it free. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’ll be there in no time.
Who Needs Complex Narratives?: Tim Parks' Enlightened Cynicism
Identities can be burdensome, as Parks intimates his is when he reveals his story has brought him to a place where he’s making a living engaging in an activity that serves no need—and yet he can’t bring himself to seek out other employment. In an earlier, equally fascinating post titled "The Writer's Job," Parks interprets T.S. Eliot's writing about writers as suggesting that "only those who had real personality, special people like himself, would appreciate what a burden personality was and wish to shed it."
One of my professors asked our class last week how many of us were interested in writing fiction of our own. She was trying to get us to consider the implications of using one strategy for telling a story based on your own life over another. But I was left thinking instead about the implications of nearly everyone in the room raising a hand. Is the audience for any aspiring author’s work composed exclusively of other aspiring authors? If so, does that mean literature is no more than an exclusive society of the published and consumed, forever screening would-be initiates, forever dangling the prize of admission to their ranks, allowing only the elite to enter, and effectively sealed off from the world of the non-literary?
Most of our civilization has advanced beyond big books. People still love their stories, but everyone’s time is constrained, and the choices of entertainment are infinite. Reading The Marriage Plot is an extravagance. Reading Of Human Bondage, the book we’re discussing in my class, is only for students of college English and the middle-class white guys trying to impress them. Nevertheless, Jonathan Franzen, who’s written two lengthy, too lengthy works of fiction that enjoy a wide readership, presumably made up primarily of literary aspirants like me (I read and enjoyed both), told an Italian interviewer that “There is an enormous need for long, elaborate, complex stories, such as can only be written by an author concentrating alone, free from the deafening chatter of Twitter.”
British author Tim Parks quotes Franzen in a provocative post at The New York Review of Books titled “Do We Need Stories?” Parks notes that “as a novelist it is convenient to think that by the nature of the job one is on the side of the good, supplying an urgent and general need.” Though he’s written some novels of his own, and translated several others from Italian to English, Parks suspects that Franzen is wrong, that as much as we literary folk may enjoy them, we don’t really need complex narratives. We should note that just as Franzen is arguing on behalf of his own vocation Parks is arguing against his, thus effecting a type of enlightened cynicism toward his own work and that of others in the same field. “Personally,” he says, “I fear I’m too enmired in narrative and self narrative to bail out now. I love an engaging novel, I love a complex novel; but I am quite sure I don’t need it.”
Parks’ argument is fascinating for what it reveals about what many fiction writers and aficionados believe they’re doing when they’re telling stories. It’s also fascinating for what it represents about authors and their attitudes toward writing. Parks rubs up against some profound insights, but then succumbs to some old-fashioned humanities nonsense. Recalling a time when he served as a judge for a literary award, Parks quotes the case made by a colleague on behalf of his or her favored work, which was excellent, the colleague insisted, “because it offers complex moral situations that help us get a sense of how to live and behave.” As life becomes increasingly complex, then, fraught with distractions like those incessant tweets, we need fictional accounts of complex moral dilemmas to help us train our minds to be equal to the task of living in the modern world. Parks points out two problems with this view: fiction isn’t the only source of stories, and behind all that complexity is the author’s take on the moral implications of the story’s events, which readers must decide whether to accept or reject. We can’t escape complex moral dilemmas, so we may not really need any simulated training. And we have to pay attention lest we discover our coach has trained us improperly. The power of stories can, as Parks suggests, be “pernicious.” “In this view of things, rather than needing stories we need to learn how to smell out their drift and resist them.” (Yeah, but does anyone read Ayn Rand who isn't already convinced?)
But Parks doesn’t believe the true goal of either authors or readers is moral development or practical training. Instead, complex narratives give pleasure because they bolster our belief in complex selves. Words like God, angel, devil, and ghost, Parks contends, have to come with stories attached to them to be meaningful because they don’t refer to anything we can perceive. From this premise of one-word stories, he proceeds,
Arguably the most important word in the invented-referents category is “self.” We would like the self to exist perhaps, but does it really? What is it? The need to surround it with a lexical cluster of reinforcing terms—identity, character, personality, soul—all with equally dubious referents suggests our anxiety. The more words we invent, the more we feel reassured that there really is something there to refer to.
When my classmates and I raised our hands and acknowledged our shared desire to engage in the creative act of storytelling, what we were really doing, according to Parks, was expressing our belief in that fictional character we refer to reverentially as ourselves.
One of the accomplishments of the novel, which as we know blossomed with the consolidation of Western individualism, has been to reinforce this ingenious invention, to have us believe more and more strongly in this sovereign self whose essential identity remains unchanged by all vicissitudes. Telling the stories of various characters in relation to each other, how something started, how it developed, how it ended, novels are intimately involved with the way we make up ourselves. They reinforce a process we are engaged in every moment of the day, self creation. They sustain the idea of a self projected through time, a self eager to be a real something (even at the cost of great suffering) and not an illusion.
Parks is just as much a product of that “Western individualism” as the readers he’s trying to enlighten as to the fictional nature of their essential being. As with his attempt at undermining the ultimate need for his own profession, there’s a quality of self-immolation in this argument—except of course there’s nothing, really, to immolate.
What exactly, we may wonder, is doing the reading, is so desperate to believe in its own reality? And why is that belief in its own reality so powerful that this thing, whatever it may be, is willing to experience great suffering to reinforce it? Parks suggests the key to the self is some type of unchanging and original coherence. So we like stories because we like characters who are themselves coherent and clearly delineated from other coherent characters.
The more complex and historically dense the stories are, the stronger the impression they give of unique and protracted individual identity beneath surface transformations, conversions, dilemmas, aberrations. In this sense, even pessimistic novels—say, J.M. Coetzee’s Disgrace—can be encouraging: however hard circumstances may be, you do have a self, a personal story to shape and live. You are a unique something that can fight back against all the confusion around. You have pathos.
In this author’s argument for the superfluity of authors, the centrality of pain and suffering to the story of the self is important to note. He makes the point even more explicit, albeit inadvertently, when he says, “If we asked the question of, for example, a Buddhist priest, he or she would probably tell us that it is precisely this illusion of selfhood that makes so many in the West unhappy.”
I don’t pretend to have all the questions surrounding our human fascination with narrative—complex and otherwise—worked out, but I do know Parks’ premise is faulty.
Unlike many professional scholars in the Humanities, Parks acknowledges that at least some words can refer to things in the world. But he goes wrong when he assumes that if there exists no physical object to refer to the word must have a fictional story attached to it. There is good evidence, for instance, that our notions of God and devils and spirits are not in fact based on stories, though stories clearly color their meanings. Our interactions with invisible beings are based on the same cognitive mechanisms that help us interact with completely visible fellow humans. What psychologists call theory of mind, our reading of intentions and mental states into others, likely extends into realms where no mind exists to have intentions and states. That’s where our dualistic philosophy comes from.
While Parks is right in pointing out that the words God and self don’t have physical referents—though most of us, I assume, think of our bodies as ourselves to some degree—he’s completely wrong in inferring these words only work as fictional narratives. People assume, wrongly, that God is a real being because they have experiences with him. In the same way, the self isn’t an object but an experience—and a very real experience. (Does the word fun have to come with a story attached?) The consistency across time and circumstance, the sense of unified awareness, these are certainly exaggerated at times. So too is our sense of transformation, though, as anyone knows who’s discovered old writings from an earlier stage of life and thought, “Wow, I was thinking about the same stuff back then as I am now—even my writing style is similar!”
Parks is wrong too about so-called Western society, as pretty much everyone who uses that term is. It’s true that some Asian societies have a more collectivist orientation, but I’ve heard rumors that a few Japanese people actually enjoy reading novels. (The professor of the British Lit course I'm taking is Chinese.) Those Buddhist monks are deluded too. Ruut Veenhoven surveyed 43 nations in the early 1990s and discovered that as individualism increases, so too does happiness. “There is no pattern of diminishing returns,” Veenhoven writes. “This indicates that individualization has not yet passed its optimum.” What this means is that, assuming Parks is right in positing that novel-reading increases individualism, reading novels could make you happier. Unfortunately, a lot of high-brow, literary authors would bristle at this idea because it makes of their work less a heroic surveying of the abyss and more of a commodity.
Parks doesn’t see any meaningful distinction between self and identity, but psychologists would use the latter term to label his idea of a coherent self-story. Dan McAdams is the leading proponent of the idea that in addition to a unified and stable experience of ourselves we each carry with us a story whose central theme is our own uniqueness and how it developed. He writes in his book The Stories We Live By: Personal Myths and the Making of the Self that identity is “an inner story of the self that integrates the reconstructed past, perceived present, and anticipated future to provide a life with unity, purpose, and meaning.” But we don’t just tell these stories to ourselves, nor are we solely interested in our own story. One of the functions of identity is to make us seem compelling and attractive to other people. Parks, for instance, tells us the story of how he provides a service (writing and translating) that he understands isn’t necessary to anyone. And, if you’re like me, at least for a moment, you’re impressed with his ability to shoulder the burden of this enlightened cynicism. He’s a bit like those Buddhist monks who go to such great lengths to eradicate their egos.
The insight that Parks never quite manages to arrive at is that suffering is integral to stories of the self. If my story of myself, my identity, doesn’t feature any loss or conflict, then it’s not going to be very compelling to anyone. But what’s really compelling are the identities that somehow manage to cause pain to the very selves whose stories they are. Identities can be burdensome, as Parks intimates his is when he reveals his story has brought him to a place where he’s making a living engaging in an activity that serves no need—and yet he can’t bring himself to seek out other employment. In an earlier, equally fascinating post titled "The Writer's Job," Parks interprets T.S. Eliot's writing about writers as suggesting that "only those who had real personality, special people like himself, would appreciate what a burden personality was and wish to shed it."
If we don’t suffer for our identities, then we haven’t earned them. Without the pain of initiation, we don’t really belong. We’re not genuinely who we claim to be. We’re tourists. We’re poseurs. Mitt Romney, for instance, is thought to be an inauthentic conservative because he hasn’t shown sufficient willingness to lose votes—and possibly elections—for the sake of his convictions. We can’t help but assume equivalence between cost and value. If your identity doesn’t entail some kind of cost, well, then it’s going to come off as cheap. So a lot of people play up, or even fabricate, the suffering in their lives.
What about Parks’ question? Are complex narratives necessary? Maybe, like identities, the narratives we tell, as well as the narratives we enjoy, work as costly signals, so that the complexity of the stories you like serves as a reliable indication of the complexity of your identity. If you can truly appreciate a complex novel, you can truly appreciate a complex individual. Maybe our complicated modern civilization, even with its tweets and Kindles, is more a boon than a hindrance to complexity and happiness. What this would mean is that if two people on the subway realize they’re both reading the same complex narrative they can be pretty sure they’re compatible as friends or lovers. Either that, or they’re both English professors and they have no idea what’s going on, in which case they’re still compatible but they’ll probably hate each other regardless.
At least, that's the impression I get from David Lodge's Small World, the latest complex narrative assigned in my English Lit course taught by a professor from an Eastern society.
Also read:
WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?
And:
HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity
We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds?
The appeal of post-apocalyptic stories stems from the joy of experiencing anew the birth of humanity. The renaissance never occurs in M.T. Anderson’s Feed, in which the main character is rendered hopelessly complacent by the entertainment and advertising beamed directly into his brain. And it is that very complacency, the product of our modern civilization's unfathomable complexity, that most threatens our sense of our own humanity. There was likely a time, though, when small groups composed of members of our species were beset by outside groups composed of individuals of a different nature, a nature that when juxtaposed with ours left no doubt as to who the humans were.
In Suzanne Collins’ The Hunger Games, Katniss Everdeen reflects on how the life-or-death stakes of the contest she and her fellow “tributes” are made to participate in can transform teenage boys and girls into crazed killers. She’s been brought to a high-tech mega-city from District 12, a mining town as quaint as the so-called Capitol is futuristic. Peeta Mellark, who was chosen by lottery as the other half of the boy-girl pair of tributes from the district, has just said to her, “I want to die as myself…I don’t want them to change me in there. Turn me into some kind of monster that I’m not.” Peeta also wants “to show the Capitol they don’t own me. That I’m more than just a piece in their Games.” The idea startles Katniss, who at this point is thinking of nothing but surviving the games—knowing full well that there are twenty-two more tributes and only one will be allowed to leave the arena alive. Annoyed by Peeta’s pronouncement of a higher purpose, she thinks,
We will see how high and mighty he is when he’s faced with life and death. He’ll probably turn into one of those raging beast tributes, the kind who tries to eat someone’s heart after they’ve killed them. There was a guy like that a few years ago from District 6 called Titus. He went completely savage and the Gamemakers had to have him stunned with electric guns to collect the bodies of the players he’d killed before he ate them. There are no rules in the arena, but cannibalism doesn’t play well with the Capitol audience, so they tried to head it off. (141-3)
Cannibalism is the ultimate relinquishing of the mantle of humanity because it entails denying the humanity of those being hunted for food. It’s the most basic form of selfishness: I kill you so I can live.
The threat posed to humanity by hunger is also the main theme of Cormac McCarthy’s The Road, the story of a father and son wandering around the ruins of a collapsed civilization. The two routinely search abandoned houses for food and supplies, and in one they discover a bunch of people locked in a cellar. The gruesome clue to the mystery of why they’re being kept is that some have limbs amputated. The men keeping them are devouring the living bodies a piece at a time. After a harrowing escape, the boy, understandably disturbed, asks, “They’re going to kill those people, arent they?” His father, trying to protect him from the harsh reality, answers yes, but tries to be evasive, leading to this exchange:
Why do they have to do that?
I dont know.
Are they going to eat them?
I dont know.
They’re going to eat them, arent they?
Yes.
And we couldnt help them because then they’d eat us too.
Yes.
And that’s why we couldnt help them.
Yes.
Okay.
But of course it’s not okay. After they’ve put some more distance between them and the human abattoir, the boy starts to cry. His father presses him to explain what’s wrong:
Just tell me.
We wouldnt ever eat anybody, would we?
No. Of course not.
Even if we were starving?
We’re starving now.
You said we werent.
I said we werent dying. I didnt say we werent starving.
But we wouldnt.
No. We wouldnt.
No matter what.
No. No matter what.
Because we’re the good guys.
Yes.
And we’re carrying the fire.
And we’re carrying the fire. Yes.
Okay. (127-9)
And this time it actually is okay because the boy, like Peeta Mellark, has made it clear that if the choice is between dying and becoming a monster he wants to die.
This preference for death over depredation of others is one of the hallmarks of humanity, and it poses a major difficulty for economists and evolutionary biologists alike. How could this type of selflessness possibly evolve?
John von Neumann, one of the founders of game theory, served an important role in developing the policies that have so far prevented the real-life apocalypse from taking place. He is credited with the strategy of Mutually Assured Destruction, or MAD (he liked amusing acronyms), that prevailed during the Cold War. As the name implies, the goal was to assure the Soviets that if they attacked us everyone would die. Since the U.S. knew the same was true of any of our own plans to attack the Soviets, a tense peace, or Cold War, was the inevitable result. But von Neumann was not at all content with this peace. He devoted his twilight years to pushing for the development of Intercontinental Ballistic Missiles (ICBMs) that would allow the U.S. to bomb Russia without giving the Soviets a chance to respond. In 1950, he made the infamous remark that inspired Dr. Strangelove: “If you say why not bomb them tomorrow, I say, why not today. If you say today at five o’clock, I say why not one o’clock?”
Von Neumann’s eagerness to hit the Russians first was based on the logic of game theory, and that same logic is at play in The Hunger Games and other post-apocalyptic fiction. The problem with cooperation, whether between rival nations or between individual competitors in a game of life-or-death, is that it requires trust—and once one player begins to trust the other he or she becomes vulnerable to exploitation, the proverbial stab in the back from the person who’s supposed to be watching it. Game theorists model this dynamic with a thought experiment called the Prisoner’s Dilemma. Imagine two criminals are captured and taken to separate interrogation rooms. Each criminal has the option of either cooperating with the other criminal by remaining silent or betraying him or her by confessing. Here’s how the possible outcomes stack up:
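Spelled out with stand-in numbers (years in prison, say; the values below are my own illustrative picks, and only their ordering matters), the dilemma looks something like this:

```python
# Illustrative Prisoner's Dilemma payoffs: my sentence in years, given what I do
# and what the other prisoner does. Lower is better. The specific numbers are
# invented for the example; only their ordering matters.
payoffs = {
    ("silent", "silent"):   1,   # we both keep quiet: light sentences
    ("silent", "confess"):  10,  # I keep quiet, my partner sells me out
    ("confess", "silent"):  0,   # I sell out a silent partner: I walk
    ("confess", "confess"): 5,   # we both confess: moderate sentences
}

def best_reply(other_choice):
    """Return the choice that minimizes my sentence, given the other prisoner's choice."""
    return min(("silent", "confess"), key=lambda mine: payoffs[(mine, other_choice)])

# Whichever way the other prisoner goes, confessing leaves me better off,
# which is what makes betrayal the "rational" move.
for other in ("silent", "confess"):
    print(f"If the other prisoner chooses {other!r}, my best reply is {best_reply(other)!r}.")
```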
No matter what the other player does, each of them achieves a better outcome by confessing. Von Neumann saw the standoff between the U.S. and the Soviets as a Prisoner’s Dilemma; by not launching nukes, each side was cooperating with the other. Eventually, though, one of them had to realize that the only rational thing to do was to be the first to defect.
But the way humans play games is a bit different. As it turned out, von Neumann was wrong about the game theory implications of the Cold War—neither side ever did pull the trigger; both prisoners kept their mouths shut. In Collins' novel, Katniss faces a Prisoner's Dilemma every time she encounters another tribute who may be willing to team up with her in the hunger games. The payoffs for her and Peeta look more like this:
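Here, again, the numbers are only rough survival odds I've invented for the sake of the sketch, but the shape of the game is different from the classic dilemma: so long as other tributes remain in the arena, a trusted ally is worth more than a one-time stab in the back.

```python
# Stand-in survival odds for Katniss while other tributes remain in the arena,
# depending on what she and Peeta each choose. All values are invented for
# illustration; the point is that mutual teamwork beats one-sided betrayal.
katniss_odds = {
    ("team up", "team up"): 0.40,  # pooled skills, shared watches
    ("team up", "betray"):  0.05,  # she trusts him and takes the knife in the back
    ("betray",  "team up"): 0.30,  # she exploits his trust but loses a useful ally
    ("betray",  "betray"):  0.15,  # both go it alone
}

for peeta_move in ("team up", "betray"):
    best = max(("team up", "betray"), key=lambda k_move: katniss_odds[(k_move, peeta_move)])
    print(f"If Peeta chooses to {peeta_move}, Katniss does best by choosing to {best}.")
```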
In the context of the hunger games, then, it makes sense to team up with rivals as long as they have useful skills, knowledge, or strength. Each tribute knows, furthermore, that as long as he or she is useful to a teammate, it would be irrational for that teammate to defect.
The Prisoner’s Dilemma logic gets much more complicated when you start having players try to solve it over multiple rounds of play. Game theorists refer to each time a player has to make a choice as an iteration. And to model human cooperative behavior you have to not only have multiple iterations but also find a way to factor in each player’s awareness of how rivals have responded to the dilemma in the past. Humans have reputations. Katniss, for instance, doesn’t trust the Career tributes because they have a reputation for being ruthless. She even begins to suspect Peeta when she sees that he’s teamed up with the Careers. (His knowledge of Katniss is a resource to them, but he’s using that knowledge in an irrational way—to protect her instead of himself.) On the other hand, Katniss trusts Rue because she's young and dependent—and because she comes from an adjacent district not known for sending tributes who are cold-blooded.
When you have multiple iterations and reputations, you also open the door for punishments and rewards. At the most basic level, people reward those who they witness cooperating by being more willing to cooperate with them. As we read or watch The Hunger Games, we can actually experience the emotional shift that occurs in ourselves as we witness Katniss’s cooperative behavior.
People punish those who defect by being especially reluctant to trust them. At this point, the analysis is still within the realm of the purely selfish and rational. But you can’t stay in that realm for very long when you’re talking about the ways humans respond to one another.
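A toy simulation makes the shift easy to see. Everything in it (the names, the defection rates, the crude bookkeeping of who has wronged whom) is my own invention rather than anything out of Collins or the game-theory literature, but it captures how reputation turns a single betrayal into a lasting penalty:

```python
import random

# A toy model of repeated play with reputation: players remember who has
# defected against them and refuse to trust known defectors afterward.
# The names and defection rates are invented purely for illustration.
class Player:
    def __init__(self, name, defect_rate):
        self.name = name
        self.defect_rate = defect_rate   # how often this player betrays a partner
        self.known_defectors = set()     # a crude stand-in for reputation

    def choose(self, partner):
        if partner.name in self.known_defectors:
            return "defect"              # punish: withhold cooperation from known cheaters
        return "defect" if random.random() < self.defect_rate else "cooperate"

def play_round(a, b):
    move_a, move_b = a.choose(b), b.choose(a)
    if move_a == "defect":
        b.known_defectors.add(a.name)    # word gets around
    if move_b == "defect":
        a.known_defectors.add(b.name)
    return move_a, move_b

random.seed(12)  # keep the example reproducible
katniss = Player("Katniss", defect_rate=0.0)  # a reliable cooperator
career = Player("Career", defect_rate=0.8)    # ruthless by reputation
for i in range(5):
    k_move, c_move = play_round(katniss, career)
    print(f"Round {i + 1}: Katniss plays {k_move}, the Career plays {c_move}")
```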
Each time Katniss encounters another tribute in the games she faces a Prisoner’s Dilemma. Until the final round, the hunger games are not a zero-sum contest—which means that a gain for one doesn’t necessarily mean a loss for the other. Ultimately, of course, Katniss and Peeta are playing a zero-sum game; since only one tribute can win, one of any two surviving players at the end will have to kill the other (or let him die). Every time one tribute kills another, the math of the Prisoner’s Dilemma has to be adjusted. Peeta, for instance, wouldn’t want to betray Katniss early on, while there are still several tributes trying to kill them, but he would want to balance the benefits of her resources with whatever advantage he could gain from her unsuspecting trust—so as they approach the last few tributes, his temptation to betray her gets stronger. Of course, Katniss knows this too, and so the same logic applies for her.
As everyone who’s read the novel or seen the movie knows, however, this isn’t how either Peeta or Katniss plays in the hunger games. And we already have an idea of why that is: Peeta has said he doesn’t want to let the games turn him into a monster. Figuring out the calculus of the most rational decisions is well and good, but humans are often moved by their emotions—fear, affection, guilt, indebtedness, love, rage—to behave in ways that are completely irrational—at least in the near term. Peeta is in love with Katniss, and though she doesn’t quite trust him at first, she proves willing to sacrifice herself in order to help him survive. This goes well beyond cooperation to serve purely selfish interests.
Many evolutionary theorists believe that at some point in our evolutionary history, humans began competing with each other to see who could be the most cooperative. This paradoxical idea emerges out of a type of interaction between and among individuals called costly signaling. Many social creatures must decide who among their conspecifics would make the best allies. And all sexually reproducing animals have to have some way to decide with whom to mate. Determining who would make the best ally or who would be the fittest mate is so important that only the most reliable signals are given any heed. What makes the signals reliable is their cost—only the fittest can afford to engage in costly signaling. Some animals have elaborate feathers that are conspicuous to predators; others have massive antlers. This is known as the handicap principle. In humans, the theory goes, altruism somehow emerged as a costly signal, so that the fittest demonstrate their fitness by engaging in behaviors that benefit others to their own detriment. The boy in The Road, for instance, isn’t just upset by the prospect of having to turn to cannibalism himself; he’s sad that he and his father weren’t able to help the other people they found locked in the cellar.
We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds? But when it really comes down to it, when it really matters most, both Katniss and Peeta demonstrate that they’re willing to protect one another even at a cost to themselves.
The birth of humanity occurred, somewhat figuratively, when people refused to play the game of me versus you and determined instead to play us versus them. Humans don’t like zero-sum games, and whenever possible they try to change the rules so there can be more than one winner. To do that, though, they have to make it clear that they would rather die than betray their teammates. In The Road, the father and his son continue to carry the fire, and in The Hunger Games Peeta gets his chance to show he’d rather die than be turned into a monster. By the end of the story, it’s really no surprise what Katniss chooses to do either. Saving her sister may not have been purely altruistic from a genetic standpoint. But Peeta isn’t related to her, nor is he her only—or even her most eligible—suitor. Still, her moments of cold strategizing notwithstanding, we've had her picked as an altruist all along.
Of course, humanity may have begun with the sense that it’s us versus them, but as it’s matured the us has grown to encompass an ever wider assortment of people and the them has receded to include more and more circumscribed groups of evil-doers. Unfortunately, there are still all too many people who are overly eager to treat unfamiliar groups as rival tribes, and all too many people who believe that the best governing principle for society is competition—the war of all against all. Altruism is one of the main hallmarks of humanity, and yet some people are simply more altruistic than others. Let’s just hope that it doesn’t come down to us versus them…again.
Life's White Machine: James Wood and What Doesn't Happen in Fiction
For James Wood, fiction is communion. This view has implications about what constitutes the best literature—all the elements from description to dialogue should work to further the dramatic development of the connection between reader and character.
No one is a better reader of literary language than James Wood. In his reviews, he conveys with grace and precision his uncanny feel for what authors set out to say, what they actually end up saying, and what any discrepancy might mean for their larger literary endeavor. He effortlessly and convincingly infers from the lurch of faulty lines the confusions and pretensions and lacunae in understanding of struggling writers. Some take steady aim at starkly circumscribed targets, his analysis suggests, while others, desperate to achieve some greater, more devastating impact, shoot wistfully into the clouds. He can even listen to the likes of Republican presidential candidate Rick Santorum and explain, with his seemingly eidetic knowledge of biblical history, what is really meant when the supposed Catholic uses the word steward.
As a critic, Wood’s ability to see character in narration and to find the author, with all his conceits and difficulties, in the character is often downright unsettling. For him there exists no divide between language and psychology—literature is the struggle of conflicted minds to capture the essence of experiences, their own and others’.
When Robert Browning describes the sound of a bird singing its song twice over, in order to ‘recapture/ The first fine careless rapture,’ he is being a poet, trying to find the best poetic image; but when Chekhov, in his story ‘Peasants,’ says that a bird’s cry sounded as if a cow had been locked up in a shed all night, he is being a fiction writer: he is thinking like one of his peasants. (24)
This is from Wood’s How Fiction Works. In the midst of a long paean to the power of free indirect style, the technique that allows the language of the narrator to bend toward and blend with the thoughts and linguistic style of characters—moving in and out of their minds—he deigns to mention, in a footnote, an actual literary theory, or rather Literary Theory. Wood likes Nabokov’s scene in the novel Pnin that has the eponymous professor trying to grasp a nutcracker in a sink full of dishes. The narrator awkwardly calls it a “leggy thing” as it slips through his grasp. “Leggy” conveys the image. “But ‘thing’ is even better, precisely because it is vague: Pnin is lunging at the implement, and what word in English better conveys a messy lunge, a swipe at verbal meaning, than ‘thing’?” (25) The vagueness makes of the psychological drama a contagion. There could be no symbol more immediately felt.
The Russian Formalists come into Wood’s discussion here. Their theory focused on metaphors that bring about an “estranging” or “defamiliarizing” effect. Wood would press them to acknowledge that this making strange of familiar objects and experiences works in the service of connecting the minds of the reader with the mind of the character—it’s anything but random:
But whereas the Russian Formalists see this metaphorical habit as emblematic of the way that fiction does not refer to reality, is a self-enclosed machine (such metaphors are the jewels of the author’s freakish, solipsistic art), I prefer the way that such metaphors, as in Pnin’s “leggy thing,” refer deeply to reality: because they emanate from the characters themselves, and are fruits of free indirect style. (26)
Language and words and metaphors, Wood points out, by their nature carry us toward something that is diametrically opposed to collapsing in on ourselves. Indeed, there is something perverse about the insistence of so many professional scholars devoted to the study of literature that the main thrust of language is toward some unacknowledged agenda of preserving an unjust status quo—with the implication that the only way to change the world is to torture our modes of expression, beginning with literature (even though only a tiny portion of most first world populations bother to read any).
For Wood, fiction is communion. This view has implications about what constitutes the best literature—all the elements from description to dialogue should work to further the dramatic development of the connection between reader and character. Wood even believes that the emphasis on “round” characters is overstated, pointing out that many of the most memorable—Jean Brodie, Mr. Biswas—are one-dimensional and unchanging. Nowhere in the table of contents of How Fiction Works, or even in the index, does the word plot appear. He does, however, discuss plot in his response to postmodernists’ complaints about realism. Wood quotes author Rick Moody:
It’s quaint to say so, but the realistic novel still needs a kick in the ass. The genre, with its epiphanies, its rising action, its predictable movement, its conventional humanisms, can still entertain and move us on occasion, but for me it’s politically and philosophically dubious and often dull. Therefore, it needs a kick in the ass.
Moody is known for a type of fiction that intentionally sabotages the sacred communion Wood sees as essential to the experience of reading fiction. He begins his response by unpacking some of the claims in Moody’s fussy pronouncement:
Moody’s three sentences efficiently compact the reigning assumptions. Realism is a “genre” (rather than, say, a central impulse in fiction-making); it is taken to be mere dead convention, and to be related to a certain kind of traditional plot, with predictable beginnings and endings; it deals in “round” characters, but softly and piously (“conventional humanisms”); it assumes that the world can be described, with a naively stable link between word and referent (“philosophically dubious”); and all this will tend toward a conservative or even oppressive politics (“politically… dubious”).
Wood begins the section following this analysis with a one-sentence paragraph: “This is all more or less nonsense” (224-5) (thus winning my devoted readership).
That “more or less” refers to Wood’s own frustrations with modern fiction. Conventions, he concedes, tend toward ossification, though a trope’s status as a trope, he maintains, doesn’t make it untrue. “I love you” is the most clichéd sentence in English. That doesn’t nullify the experience of falling in love (236). Wood does believe, however, that realistic fiction is too eventful to live up to the label.
Reviewing Ben Lerner’s exquisite short novel Leaving the Atocha Station, Wood lavishes praise on the postmodernist poet’s first work of fiction. He writes of the author and his main character Adam Gordon,
Lerner is attempting to capture something that most conventional novels, with their cumbersome caravans of plot and scene and "conflict," fail to do: the drift of thought, the unmomentous passage of undramatic life. Several times in the book, he describes this as "that other thing, the sound-absorbent screen, life’s white machine, shadows massing in the middle distance… the texture of et cetera itself." Reading Tolstoy, Adam reflects that even that great master of the texture of et cetera itself was too dramatic, too tidy, too momentous: "Not the little miracles and luminous branching injuries, but the other thing, whatever it was, was life, and was falsified by any way of talking or writing or thinking that emphasized sharply localized occurrences in time." (98)
Wood is suspicious of plot, and even of those epiphanies whereby characters are rendered dynamic or three-dimensional or “round,” because he seeks in fiction new ways of seeing the world he inhabits according to how it might be seen by lyrically gifted fellow inhabitants. Those “cumbersome caravans of plot and scene and ‘conflict’" tend to be implausible distractions, forcing the communion into narrow confessionals, breaking the spell.
As a critic who has garnered wide acclaim from august corners conferring a modicum of actual authority, and one who's achieved something quite rare for public intellectuals, a popular following, Wood is (too) often criticized for his narrow aestheticism. Once he closes the door on goofy postmodern gimcrack, it remains closed to other potentially relevant, potentially illuminating cultural considerations—or so his detractors maintain. That popular following of his is, however, composed of a small subset of fiction readers. And the disconnect between consumers of popular fiction and the more literary New Yorker subscribers speaks not just to the cultural issue of declining literacy or growing apathy toward fictional writing but to the more fundamental question of why people seek out narratives, along with the question Wood proposes to address in the title of his book: how does fiction work?
While Wood communes with synesthetic flaneurs, many readers are looking to have their curiosity piqued, their questing childhood adventurousness revived, their romantic and nightmare imaginings played out before them. “If you look at the best of literary fiction," Benjamin Percy said in an interview with Joe Fassler,
you see three-dimensional characters, you see exquisite sentences, you see glowing metaphors. But if you look at the worst of literary fiction, you see that nothing happens. Somebody takes a sip of tea, looks out the window at a bank of roiling clouds and has an epiphany.
The scene Percy describes is even more eventful than what Lerner describes as “life’s white machine”—it features one of those damn epiphanies. But Percy is frustrated with heavy-handed plots too.
In the worst of genre fiction, you see hollow characters, you see transparent prose, you see the same themes and archetypes occurring from book to book. If you look at the best of genre fiction, you see this incredible desire to discover what happens next.
The interview is part of Fassler's post on the Atlantic website, “How Zombies and Superheroes Conquered Highbrow Fiction.” Percy is explaining the appeal of a new class of novel.
So what I'm trying to do is get back in touch with that time of my life when I was reading genre, and turning the pages so quickly they made a breeze on my face. I'm trying to take the best of what I've learned from literary fiction and apply it to the best of genre fiction, to make a kind of hybridized animal.
Is it possible to balance the two impulses: the urge to represent and defamiliarize, to commune, on the one hand, and the urge to create and experience suspense on the other? Obviously, if the theme you’re taking on is the struggle with boredom or the meaningless wash of time—white machine reminds me of a washer—then an incident-rich plot can only be ironic.
The solution to the conundrum is that no life is without incident. Fiction’s subject has always been births, deaths, comings-of-age, marriages, battles. I’d imagine Wood himself is often in the mood for something other than idle reflection. Ian McEwan, whose Atonement provides Wood an illustrative example of how narration brilliantly captures character, is often taken to task for overplotting his novels. In a New Yorker interview with Daniel Zalewski, McEwan cites Henry James to the effect that novels have an obligation to “be interesting,” and admits to finding “most novels incredibly boring. It’s amazing how the form endures. Not being boring is quite a challenge.” And if he thinks most novels are boring he should definitely stay away from the short fiction that gets published in the New Yorker nowadays.
A further implication of Wood’s observation about narration’s capacity for connecting reader to character is that characters who live eventful lives should inhabit eventful narratives. This shifts the issue of plot back to the issue of character, so the question is not what types of things should or shouldn’t happen in fiction but rather what type of characters do we want to read about? And there’s no question that literary fiction over the last century has been dominated by a bunch of passive losers, men and women flailing desperately about before succumbing to societal or biological forces. In commercial fiction, the protagonists beat the odds; in literature, the odds beat the protagonists.
There’s a philosophy at play in this dynamic. Heroes are thought to lend themselves to a certain view of the world, where overcoming sickness and poverty and cultural impoverishment is more of a rite of passage than a real gauge of how intractable those impediments are for nearly everyone who faces them. If audiences are exposed to too many tales of heroism, then hardship becomes a prop in personal development. Characters overcoming difficulties trivializes those difficulties. Winston Smith can’t escape O’Brien and Room 101 or readers won’t appreciate the true threat posed by Big Brother. The problem is that the ascent of the passive loser and the fiction of acquiescence don’t exactly inspire reform-minded action either.
Adam Gordon, the narrator of Leaving the Atocha Station, is definitely a loser. He worries all day that he’s some kind of impostor. He’s whiny and wracked with self-doubt. But even he doesn’t sit around doing nothing. The novel is about his trip to Spain. He pursues women with mixed success. He does readings of his poetry. He witnesses a terrorist attack. And these activities and events are interesting, as James insisted they must be. Capturing the feel of uneventful passages of time may be a worthy literary ambition, but most people seek out fiction to break up periods of nothingness. It’s never the case in real life that nothing is happening anyway—we’re at every instance getting older. I for one don’t find the prospect of spending time with people or characters who just sit passively by as that happens all that appealing.
In a remarkably lame failure of a lampoon in Harper's, Colson Whitehead targets Wood's enthusiasm for Saul Bellow. And Bellow was indeed one of those impossibly good writers who could describe eating Corn Flakes and make it profound and amusing. Still, I'm a little suspicious of anyone who claims to enjoy (though enjoyment shouldn't be the only measure of literary merit) reading about the Bellow characters who wander around Chicago as much as reading about Henderson wandering around Africa.
Henderson: I'm actually looking forward to the next opportunity I get to hang out with that crazy bastard.
New Yorker's Talk of the Town Goes Sci-Fi
The “Talk of the Town” pieces in The New Yorker have a distinctive style. Here, I write a fictional one about a man who’s gradually replacing parts of himself to potentially become immortal.
Dept. of Neurotechnology
Undermin(d)ing Mortality
"Most people's first response," Michael Maytree tells me over lunch, "is, you know, of course I want to live forever." The topic of our conversation surprises me, as Maytree's fame hinges not on his longevity—as remarkable as his ninety-seven years makes him—but on his current status as record-holder for greatest proportion of manmade brain in any human. Maytree says according to his doctors his brain is around seventy percent prosthetic. (Most people with prosthetic brain parts bristle at the term "artificial," but Maytree enjoys the running joke of his wife's about any extraordinary aspect of his thinking apparatus being necessarily unreal.)
He goes on, "But then you have to ask yourself: Do I really want to live through the pain of grieving for people again and again? Is there enough to look forward to to make going on—and on and on—worthwhile?" He stops to take a long sip of his coffee while quickly scanning our fellow patrons in the diner on West 103rd. Only when his age is kept in mind does there seem anything unsettling about his sharp-eyed attunement. Within the spectrum of aging, Maytree could easily pass for a younger guy with poor skin resiliency.
"The question I find most troubling though is, will I, as I get really, really old, be able to experience things, particularly relationships, as…"—he rolls his right hand, still holding a forkful of couscous, as he searches for the mot juste—"as profoundly—or fulfillingly—as I did when I was younger." He smirks and adds, "Like when I was still in my eighties."
When we first sat down in the diner, I asked Maytree if he'd received much attention from anyone other than techies and fellow implantees. Aside from the never-ending cascade of questions posted on the MindFX website he helps run (www.mindfx.gov), which serves as something of a support group listserv for people with brain prostheses and their families, and the requisite visits to research labs, including the one he receives medical care from, he gets noticed very little. The question about his brain he finds most interesting, he says, comes up frequently at the labs.
"I'd thought about it before I got the last implant," he said. "It struck me when Dr. Branson"—Maytree's chief neurosurgeon—"told me when it was done I'd have something like seventy percent brain replacement. Well, if my brain is already mostly mechanical, it shouldn't be that much of a stretch to transfer the part that isn't into some sort of durable medium—and, viola, my mind would become immortal."
It turned out the laboratory where Branson performed the surgery already had a division devoted to work on this very prospect. The operation was the latest ("probably not the last," Maytree says) in a series of replacements and augmentations that began with a treatment for an injury he sustained in combat while serving in Iran and continued as he purchased shares in several biotech and neural implant businesses and watched their value soar. Though the work is being kept secret, it seems Maytree would be a likely subject if experimental procedures are in the offing. Hence my follow-up question: "Would you do it?"
"Think of a friend you've made recently," he enjoins me now, putting down his fork so he can gesticulate freely. "Now, is that friendship comparable—I mean emotion-wise—with friendships you began as a child? Sometimes I think there's no comparison; relationships in childhood are much deeper. Is it the same with every experience?" He rests his right elbow on the table next to his plate and leans in. "Or is the difference just a trick of memory? I honestly don't know."
(Another favorite question of Maytree's: Are you conscious? He says people usually add, or at least imply, "I mean, like me," to clarify. "I always ask then, 'Are you conscious—I mean like you were five years ago?' Naturally they can't remember.")
Finally, he leans back again, looks off into space shaking his head. "It's hard to think about without getting lost in the philosophical…" He trails off a moment before continuing dreamily, with downcast eyes and absent expression. "But it's important because you kind of need to know if the new experiences are going to be worth the passing on of the old ones." And that's the crux of the problem.
"Of course," he says turning back to me with a fraught grin, "it all boils down to what's going on in the brain anyway."
—
Dennis Junk
The Adaptive Appeal of Bad Boys
From the intro to my master’s thesis where I explore the evolved psychological dynamics of storytelling and witnessing, with a special emphasis on the paradox that the most compelling characters are often less than perfect human beings. Why do audiences like Milton’s Satan, for instance? Why did we all fall in love with Tyler Durden from Fight Club? It turns out both of these characters give indications that they just may be more altruistic than they appear at first.
Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis
In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and each has a few treats placed before it. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.
Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s Nova series The Human Spark (jump to chapter 5), which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and Nova viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.
The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments. This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).
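The setup is simple enough to mock up in a few lines. The endowment, the punishment costs, and the fairness threshold below are all invented figures, meant only to illustrate why, on Fehr and Fischbacher's terms, a purely selfish third party would never pay to punish:

```python
# A bare-bones sketch of the third-party punishment version of the Dictator
# Game. The endowment, costs, and fairness threshold are invented figures.
ENDOWMENT = 100    # what the dictator is given to split
PUNISH_COST = 5    # what each unit of punishment costs the third party
PUNISH_FINE = 15   # what each unit of punishment takes from the dictator

def third_party_response(offer, fairness_threshold=30, units=2):
    """Return (selfish_punishment, reciprocator_punishment) for a given offer."""
    selfish = 0                                         # punishing only costs money
    reciprocator = units if offer < fairness_threshold else 0
    return selfish, reciprocator

for offer in (50, 20, 0):
    selfish, recip = third_party_response(offer)
    print(f"Dictator keeps {ENDOWMENT - offer} and offers {offer}: "
          f"a selfish witness punishes {selfish} units; "
          f"a strong reciprocator punishes {recip}, "
          f"paying {recip * PUNISH_COST} out of pocket to dock the dictator {recip * PUNISH_FINE}.")
```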
It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.
The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to discern between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth–order reciprocation, so not only will individuals be considered culpable for offenses they commit but also for offenses they passively witness. Flesch explains,
Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)
Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional.
That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,
Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)
If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters.
Game theory experiments conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher incurs a cost to him or herself in order to ensure that selfish actors don’t get a chance to gain a foothold in the larger, cooperative group.
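Flesch’s point about paying to punish can be captured in an equally small sketch. The 25-percent rejection threshold here is a hypothetical stand-in for the empirically observed cutoff, which varies from study to study; the function only encodes the rule that a veto leaves both players with nothing, so any rejection is a sure gain given up to punish the proposer.

```python
# A minimal sketch of the Ultimatum Game logic described above.
# The rejection threshold is an assumed, illustrative value.

def ultimatum_game(total, offer, rejection_threshold=0.25):
    """Return (proposer_payoff, responder_payoff) for a single round."""
    if offer < rejection_threshold * total:
        # Altruistic punishment: the responder forgoes a sure gain
        # so that the proposer's selfishness goes unrewarded.
        return 0, 0
    return total - offer, offer

print(ultimatum_game(total=100, offer=50))  # fair split accepted: (50, 50)
print(ultimatum_game(total=100, offer=5))   # lowball offer vetoed: (0, 0)
```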
The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:
Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)
Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to the time and energy spent convincing skeptical peers that a crime has indeed been committed.
Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.
If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice guy bona fides. On the other hand, Rochester is eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.
This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.
In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences that deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display.
Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:
If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)
Taken together, the evidence Flesch presents suggests the audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.
The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:
When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)
These characters’ bad behavior also likely serves an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish to harm innocent characters—or that they’re irredeemable would severely undermine the theory. As a first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters their creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.
(Watch Hamlin discussing the research in an interview.)
And check out this video of the experiments.
Also read:
SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE
LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME
SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION
WHAT MAKES "WOLF HALL" SO GREAT?
PUTTING DOWN THE PEN: HOW SCHOOL TEACHES US THE WORST POSSIBLE WAY TO READ LITERATURE
LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE
Campaigning Deities: Justifying the ways of Satan
Why do readers tend to admire Satan in Milton’s Paradise Lost? It’s one of the instances where a nominally bad character garners more attention and sympathy than the good guy, a conundrum I researched through an evolutionary lens as part of my master’s thesis.
[Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis]
Milton believed Christianity more than worthy of a poetic canon in the tradition of the classical poets, and Paradise Lost represents his effort at establishing one. What his Christian epic has offered for many readers over the centuries, however, is an invitation to weigh the actions and motivations of immortals in mortal terms. In the story, God becomes a human king, albeit one with superhuman powers, while Satan becomes an upstart subject. As Milton attempts to “justify the ways of God to Man,” he is taking it upon himself simultaneously, and inadvertently, to justify the absolute dominion of a human dictator. One of the consequences of this shift in perspective is the transformation of a philosophical tradition devoted to parsing the logic of biblical teachings into something akin to a political campaign between two rival leaders, each laying out his respective platform alongside a case against his rival. What was hitherto recondite and academic becomes in Milton’s work immediate and visceral.
Keats famously penned the wonderfully self-proving postulate, “Axioms in philosophy are not axioms until they are proved upon our pulses,” which leaves open the question of how an axiom might be so proved. Milton’s God responds to Satan’s approach to Earth, and his foreknowledge of Satan’s success in tempting the original pair, with a preemptive defense of his preordained punishment of Man:
…Whose fault?
Whose but his own? Ingrate! He had of Me
All he could have. I made him just and right,
Sufficient to have stood though free to fall.
Such I created all th’ ethereal pow’rs
And spirits, both them who stood and who failed:
Freely they stood who stood and fell who fell.
Not free, what proof could they have giv’n sincere
Of true allegiance, constant faith or love
Where only what they needs must do appeared,
Not what they would? What praise could they receive?
What pleasure I from such obedience paid
When will and reason… had served necessity,
Not me? (3.96-111)
God is defending himself against the charge that his foreknowledge of the fall implies that Man’s decision to disobey was born of something other than his free will. What choice could there have been if the outcome of Satan’s temptation was predetermined? If it wasn’t predetermined, how could God know what the outcome would be in advance? God’s answer—of course I granted humans free will because otherwise their obedience would mean nothing—only introduces further doubt. Now we must wonder why God cherishes Man’s obedience so fervently. Is God hungry for political power? If we conclude he is—and that conclusion seems eminently warranted—then we find ourselves on the side of Satan. It’s not so much God’s foreknowledge of Man’s fall that undermines human freedom; it’s his insistence on our obedience, under threat of terrible punishment.
Milton faces a still greater challenge in his attempt to justify God’s ways “upon our pulses” when it comes to the fallout of Man’s original act of disobedience. The Son argues on behalf of Man, pointing out that the original sin was brought about through temptation. If God responds by turning against Man, then Satan wins. The Son thus argues that God must do something to thwart Satan: “Or shall the Adversary thus obtain/ His end and frustrate Thine?” (3.156-7). Before laying out his plan for Man’s redemption, God explains why punishment is necessary:
…Man disobeying
Disloyal breaks his fealty and sins
Against the high supremacy of Heav’n,
Affecting godhead, and so, losing all,
To expiate his treason hath naught left
But to destruction sacred and devote
He with his whole posterity must die. (3.203-9)
The potential contradiction between foreknowledge and free choice may be abstruse enough for Milton’s character to convincingly discount: “If I foreknew/ Foreknowledge had no influence on their fault/ Which had no less proved certain unforeknown” (3.116-9). There is another contradiction, however, that Milton neglects to take on. If Man is “Sufficient to have stood though free to fall,” then God must justify his decision to punish the “whole posterity” as opposed to the individuals who choose to disobey. The Son agrees to redeem all of humanity for the offense committed by the original pair. God’s knowledge that every last human will disobey may not be logically incompatible with their freedom to choose; if every last human does disobey, however, the case for that freedom is severely undermined. The axiom of collective guilt precludes the axiom of freedom of choice both logically and upon our pulses.
In characterizing disobedience as a sin worthy of severe punishment—banishment from paradise, shame, toil, death—an offense he can generously expiate for Man by sacrificing the (his) Son, God seems to be justifying his dominion by pronouncing disobedience to him evil, allowing him to claim that Man’s evil made it necessary for him to suffer a profound loss, the death of his offspring. In place of a justification for his rule, then, God resorts to a simple guilt trip.
Man shall not quite be lost but saved who will,
Yet not of will in him but grace in me
Freely vouchsafed. Once more I will renew
His lapsed pow’rs though forfeit and enthralled
By sin to foul exorbitant desires.
Upheld by me, yet once more he shall stand
On even ground against his mortal foe,
By me upheld that he may know how frail
His fall’n condition is and to me owe
All his deliv’rance, and to none but me. (3.173-83)
Having decided to take on the burden of repairing the damage wrought by Man’s disobedience to him, God explains his plan:
Die he or justice must, unless for him
Some other as able and as willing pay
The rigid satisfaction, death for death. (3.210-3)
He then asks for a volunteer. In an echo of an earlier episode in the poem which has Satan asking for a volunteer to leave hell on a mission of exploration, there is a moment of hesitation before the Son offers himself up to die on Man’s behalf.
…On Me let thine anger fall.
Account Me Man. I for his sake will leave
Thy bosom and this glory next to Thee
Freely put off and for him lastly die
Well pleased. On Me let Death wreck all his rage! (3.237-42)
This great sacrifice, which is supposed to be the basis of the Son’s privileged status over the angels, is immediately undermined because he knows he won’t stay dead for long: “Yet that debt paid/ Thou wilt not leave me in the loathsome grave” (246-7). The Son will only die momentarily. This sacrifice doesn’t stack up well against the real risks and sacrifices made by Satan.
All the poetry about obedience and freedom and debt never takes on the central question Satan’s rebellion forces readers to ponder: Does God deserve our obedience? Or are the labels of good and evil applied arbitrarily? The original pair was forbidden from eating from the Tree of Knowledge—could they possibly have been right to contravene the interdiction? Since it is God being discussed, however, the assumption that his dominion requires no justification, that it is instead simply in the nature of things, might prevail among some readers, as it does for the angels who refuse to join Satan’s rebellion. The angels, after all, owe their very existence to God, as Abdiel insists to Satan. Who, then, are any of them to question his authority? This argument sets the stage for Satan’s remarkable rebuttal:
…Strange point and new!
Doctrine which we would know whence learnt: who saw
When this creation was? Remember’st thou
Thy making while the Maker gave thee being?
We know no time when we were not as now,
Know none before us, self-begot, self-raised
By our own quick’ning power…
Our puissance is our own. Our own right hand
Shall teach us highest deeds by proof to try
Who is our equal. (5.855-66)
Just as a pharaoh could claim credit for all the monuments and infrastructure he had commissioned the construction of, any king or dictator might try to convince his subjects that his deeds far exceed what he is truly capable of. If there’s no record and no witness—or if the records have been doctored and the witnesses silenced—the subjects have to take the king’s word for it.
That God’s dominion depends on some natural order, which he himself presumably put in place, makes his tendency to withhold knowledge deeply suspicious. Even the angels ultimately have to take God’s claims to have created the universe, and them along with it, solely on faith. Because that same unquestioning faith is precisely what Satan and the readers of Paradise Lost are seeking a justification for, they could be forgiven for finding the answer tautological and unsatisfying. It is the Tree of Knowledge of Good and Evil that Adam and Eve are forbidden to eat fruit from. When Adam, after hearing Raphael’s recounting of the war in heaven, asks the angel how the earth was created, he does receive an answer, but only after a suspicious preamble:
…such commission from above
I have received to answer thy desire
Of knowledge within bounds. Beyond abstain
To ask nor let thine own inventions hope
Things not revealed which the invisible King
Only omniscient hath suppressed in night,
To none communicable in Earth or Heaven:
Enough is left besides to search and know. (7.118-125)
Raphael goes on to compare knowledge to food, suggesting that excessively indulging curiosity is unhealthy. This proscription of knowledge reminded Shelley of the Prometheus myth. It might remind modern readers of The Wizard of Oz—“Pay no attention to that man behind the curtain”—or of the space monkeys in Fight Club, who repeatedly remind us that “The first rule of Project Mayhem is, you do not ask questions.” It may also resonate with news about dictators in Asia or the Middle East trying desperately to keep social media outlets from spreading word of their atrocities.
Like the protesters of the Arab Spring, Satan is putting himself at great risk by challenging God’s authority. If God’s dominion over Man and the angels is evidence not of his benevolence but of his supreme selfishness, then Satan’s rebellion becomes an attempt at altruistic punishment. The extrapolation from economic experiments like the ultimatum and dictator games to efforts to topple dictators may seem like a stretch, especially if humans are predisposed to forming and accepting positions in hierarchies, as a casual survey of virtually any modern organization suggests is the case.
Organized institutions, however, are a recent development in terms of human evolution. The English missionary Lucas Bridges wrote about his experiences with the Ona foragers in Tierra del Fuego in his 1948 book Uttermost Part of the Earth, and he expresses his amusement at his fellow outsiders’ befuddlement when they learn about the Ona’s political dynamics:
A certain scientist visited our part of the world and, in answer to his inquiries on this matter, I told him that the Ona had no chieftains, as we understand the word. Seeing that he did not believe me, I summoned Kankoat, who by that time spoke some Spanish. When the visitor repeated his question, Kankoat, too polite to answer in the negative, said: “Yes, señor, we, the Ona, have many chiefs. The men are all captains and all the women are sailors” (quoted in Boehm 62).
At least among Ona men, it seems there was no clear hierarchy. The anthropologist Richard Lee discovered a similar dynamic operating among the !Kung foragers of the Kalahari. In order to ensure that no one in the group can attain an elevated status which would allow him to dominate the others, several leveling mechanisms are in place. Lee quotes one of his informants:
When a young man kills much meat, he comes to think of himself as a chief or a big man, and he thinks of the rest of us as his servants or inferiors. We can’t accept this. We refuse one who boasts, for someday his pride will make him kill somebody. So we always speak of his meat as worthless. In this way we cool his heart and make him gentle. (quoted in Boehm 45)
These examples of egalitarianism among nomadic foragers are part of anthropologist Christopher Boehm’s survey of every known group of hunter-gatherers. His central finding is that “A distinctively egalitarian political style is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-36). This finding bears on any discussion of human evolution and human nature because small groups like these constituted the whole of humanity for all but what amounts to the final instants of geological time.
Also read:
THE ADAPTIVE APPEAL OF BAD BOYS
SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE
A Lyrical Refutation of Thomas Kinkade
The story of human warmth defying the frigid, impersonal harshness of a colorless, lifeless cosmos—in trying desperately to please, just to please, those pictures offend—that’s only half the story.
FIRELIGHT ON A SNOWY NIGHT: A Lyrical Refutation
I was eager to get out when I saw the storm, the swarm of small shadowed blurs descending
in swerves to create
a limn of white, out into the soft glowing sky of a winter night, peering through the
dark as those blurs
become streaking dabs as they pass through spheres of yellow lamplight, countless, endlessly
falling, engulfing
those sad, drooping, fiery lenses depending on their stoic posts.
I think of those Thomas Kinkade pictures my mom loves so much—everybody’s mom
loves so much—
and I have to admit they almost manage to signal it, that feeling, that mood.
Cold, brutal, uncaring wind, and a blanketing blankness of white struggled through
by the yellow and orange
warm vibrant doings of unseen humans, those quaint stone bridges over unimaginably
frigid, deathly chilling water,
somehow in their quaintness, in their suggestion of, insistence on, human ingenuity,
human doggedness, those scenes
hold out the promise of an end to the coldness, an end to the white nothing that fails,
year after year, to blot out world.
Those pictures are lies though—in almost conveying the feeling, the mood, they
do it an injustice.
In willfully ignoring the barren, venous, upward clawing, fleshed branches that rake
the eerily luminescent wind-crowned sky,
and failing to find a symbol to adequately suggest the paradoxical pace of the flakes
falling, endlessly falling
through those yellow, orange spheres of light—hurried but hypnotically slow, frantic
but easily, gracefully falling,
adjusting their cant to invisible, unforeseen and unforeseeable forces.
The story of human warmth defying the frigid, impersonal harshness of a colorless,
lifeless cosmos—
in trying desperately to please, just to please, those pictures offend—that’s
only half the story.
The woman who lit the fire sending out its light through the windows, she’s aging—
every covering of snow
is another year in the ceaseless procession, and the man, who worked so doggedly
at building a scaffold
and laying the stones for that charming bridge, he’s beyond reach of the snow, two or three
generations gone since his generous feat.
The absence of heat is its own type of energy. The wet-lashing night air is charged with it,
like the pause after a breath
awaiting the inevitable inhale—but it holds off, and holds off. Inevitable? Meanwhile,
those charged particles
of shocking white, tiny, but with visible weight—they’d kiss your cheek if you
opened your coat
and you’d know you’d been kissed by someone not alive. The ceaseless falling
and steady accumulation,
hours and days and years—humans create watches and clocks to defy time, but
this relentless rolling over
of green to white, warm to cold, thrilling, rejuvenating spring to contemplative, resigned
autumn, this we watch helplessly,
hopefully, hurtling toward those homes so far beneath the snow.
The air is charged, every flake a tiny ghost—no tinier, though, than any of us merits—
haunting the slippery medium
of night we might glide through so slow, so effortless, so sealed up to keep in our warmth,
turned inward on ourselves.
The hush, the silent yawn, is haunted with humanity’s piled up heap of here and gone,
and haunted too with
our own homeless, friendless, impossibly frightening future.
The homes of neighbors friendly donning matching caps, alike in our mutual blanketing, our
mutual muting.
Those paintings of cozy lit houses in the winter harshness remind me of the juxtaposition
of fright and absence of true threat,
those opposites we feel when young, the trick, the gift of some masterful ghost story,
properly told in such a scene,
and this night, snow creaking underfoot like those ill-hinged doors opening all on
their own, raising chills,
this night is haunted too, but less with presence than with utter absence, here and gone,
all those troubled souls,
their existence of no more consequence than the intricacy suddenly annihilated as it
collides with the flesh
just beneath my eye, collides and instantly transforms into something more medium
than message and
no sooner lands than begins to evaporate.
Also read:
THE TREE CLIMBER: A STORY INSPIRED BY W.S. MERWIN
IN HONOR OF CHARLES DICKENS ON THE 200TH ANNIVERSARY OF HIS BIRTH