Tuesday, May 29, 2007

The new silent majority

Something seems a little out of whack between the mainstream media and the American people. Take the arguments of the past few days over former President Jimmy Carter’s remarks about the Bush administration and the consequences of its particular brand of foreign policy. Carter didn’t attack President Bush personally, but said that “as far as the adverse impact on the nation around the world, this administration has been the worst in history,” which can’t really be too far out of line with what many Americans think.

In coverage typical of much of the media, however, NBC Nightly News asked whether Carter had broken “an unwritten rule when commenting on the current president,” and portrayed Carter’s words — unfairly it seems — as a personal attack on President Bush. Fox News called it “unprecedented.” Yet as an article in this newspaper on Tuesday pointed out, “presidential scholars roll their eyes at the notion that former presidents do not speak ill of current ones.”

The pattern is familiar. Polls show that most Americans want our government to stop its unilateral swaggering, and to try to solve our differences with other nations through diplomacy. In early April, for example, when the speaker of the House, the Democrat Nancy Pelosi, visited Syria and met with President Bashar al-Assad, a poll had 64 percent of Americans in favor of negotiations with the Syrians. Yet this didn’t stop an outpouring of media alarm.

A number of CNN broadcasts — including one showing Pelosi with a head scarf beside the title “Talking with Terrorists?” — failed even to mention that several Republican congressmen had met with Assad two days before Pelosi did. The conventional wisdom on the principal television talk shows was that Pelosi had “messed up on this one” (in the words of NBC’s Matt Lauer), and that she and the Democrats would pay dearly for it.

So it must have been a great surprise when Pelosi’s approval ratings stayed basically the same after her visit, or actually went up a little.

Or take the matter of the impeachment of President Bush and Vice President Cheney. Most media figures seem to consider the very idea as issuing from the unhinged imaginations of a lunatic fringe. But according to a recent poll, 39 percent of Americans in fact support it, including 42 percent of independents.

A common explanation of this tendency toward distortion is that the beltway media has attended a few too many White House Correspondents’ Dinners and so cannot possibly cover the administration with anything approaching objectivity. No doubt the Republicans’ notoriously well-organized efforts in casting the media as having a “liberal bias” also have their intended effect in suppressing criticism.

But I wonder whether this media distortion also persists because it doesn’t meet with enough criticism, and whether that’s partly because many Americans think that what they see in the major political media reflects what most other Americans really think – when actually it often doesn’t.

Psychologists coined the term “pluralistic ignorance” in the 1930s to refer to this type of misperception — more a social than an individual phenomenon — to which even smart people might fall victim. A study back then found, surprisingly, that most members of an all-white fraternity were privately in favor of admitting black members, though most assumed, wrongly, that their personal views were greatly in the minority. Natural timidity made each individual assume that he was the lone oddball.

A similar effect is common today on university campuses, where many students think that most other students are typically inclined to drink more than they themselves would wish to; researchers have found that many students indeed drink more to fit in with what they perceive to be the drinking norm, even though it really isn’t the norm. The result is an amplification of a minority view, which comes to seem like the majority view.

In pluralistic ignorance, as described by researchers Hubert O’Gorman and Stephen Garry in a 1976 paper published in Public Opinion Quarterly, “moral principles with relatively little popular support may exert considerable influence because they are mistakenly thought to represent the views of the majority, while normative imperatives actually favored by the majority may carry less weight because they are erroneously attributed to a minority.”

What is especially disturbing about the process is that it lends itself to control by the noisiest and most visible. Psychologists have noted that students who are the heaviest drinkers, for example, tend to speak out most strongly against proposed measures to curb drinking, and act as “subculture custodians” in support of their own minority views. Their strong vocalization can produce “false consensus” against such measures, as others, who think they’re part of the minority, keep quiet. As a consequence, the extremists gain influence out of all proportion to their numbers, while the views of the silent majority end up being suppressed. (The United States Department of Education has a brief page on the main ideas here.)

Think of the proposal to put a timetable on the withdrawal of troops from Iraq, supported, the latest poll says, by 60 percent of Americans, but dropped Tuesday from the latest war funding bill.

Over the past couple of months, Glenn Greenwald at Salon.com has done a superb job of documenting what certainly seems like it might be a case of pluralistic ignorance among the major political media, many (though certainly not all) of whom often seem to act as “subculture custodians” of their own amplified minority views. Routinely, it seems, views that get expressed and presented as majority views aren’t really that at all.

In a typical example in March, NBC’s Andrea Mitchell reported that most Americans wanted to pardon Scooter Libby, saying that the polling “indicates that most people think, in fact, that he should be pardoned, Scooter Libby should be pardoned.” In fact, polls showed that only 18 percent then favored a pardon.

Mitchell committed a similar error in April, claiming that polling showed Nancy Pelosi to be unpopular with the American people, her approval rating being as low as the dismal numbers of former Republican Speaker Dennis Hastert just before the 2006 November elections. But in fact the polls showed Pelosi’s approval standing at about 50 percent, while Hastert’s had been 22 percent.

As most people get their news from the major outlets, these distortions – however they occur, whether intentionally or through some more innocuous process of filtering – almost certainly translate into a strongly distorted image in people’s minds of what most people across the country think. They contribute to making mainstream Americans feel as if they’re probably not mainstream, which in turn may make them less likely to voice their opinions.

One of the most common examples of pluralistic ignorance, of course, takes place in the classroom, where a teacher has just finished a dull and completely incomprehensible lecture, and asks if there are any questions. No hands go up, as everyone feels like the lone fool, even though no student actually understood a single word. It takes guts, of course, to admit total ignorance when you might just be the only one.

Last year, author Kristina Borjesson interviewed 21 prominent journalists for her book “Feet to the Fire,” about the run-up to the Iraq War. Her most notable impression was this:

“The thing that I found really profound was that there really was no consensus among this nation’s top messengers about why we went to war. [War is the] most extreme activity a nation can engage in, and if they weren’t clear about it, that means the public wasn’t necessarily clear about the real reasons. And I still don’t think the American people are clear about it.”

Yet in the classroom of our democracy, at least for many in the media, it still seems impolitic – or at least a little too risky – to raise one’s hand.

The golden rule in the human jungle

News of the past few days and weeks suggests a rather dismal view of humanity. Israelis and Palestinians are again firing rockets at each other. On the streets of Karachi, just over a week ago, Pakistani security forces stood by while 42 people were killed and many more injured at a rally for Iftikhar Mohammad Chaudhry, deposed Chief Justice of the Supreme Court and opponent of President Pervez Musharraf. In the United States, a company compiling data on consumers is making money by helping criminals steal the savings of thousands of retired Americans.

Violence, corruption and greed. What kind of people are we?

But counter all of that with this – a young man in Cleveland has pledged $1 million of his own money to establish scholarships for disadvantaged children. His name is Braylon Edwards, and, O.K., he’s an emerging star for the Cleveland Browns who makes more money in a year than most of us will in a lifetime, but still. He could have bought a yacht and a fleet of sparkling Humvees. Instead, he invested in the future of hundreds of people he doesn’t even know.

“To secure a positive future for our country,” an ESPN article quoted Edwards as saying, “we have to start with these kids. We have to support them.”

So maybe the news is more dismal than it needs to be. But a glance at my previous columns shows that I’ve fallen into a similar pattern, writing on racial prejudice, genocide and entrenched political polarization, while not mentioning the more positive sides of the social atom. Cynicism can be pushed too far, because pure and untainted human altruism really exists – and it’s something to which we should learn to pay a lot more attention.

In a classic experiment of modern behavioral science – one that is now familiar to many people – an experimenter gives one of two people some cash, say $50, and asks them to offer some of it (any amount they choose) to another person, who can either accept or reject the offer. If the second person accepts, the cash is shared out accordingly; if he or she rejects it, no one gets to keep anything.

If we were all self-interested and greedy, then the second person would always accept the offer, as getting something is clearly better than getting nothing. And the first person, knowing this, would offer as little as possible. But that’s most certainly not what happens.

Experiments across many cultures show that people playing this “ultimatum game” typically offer anything from 25 to 50 percent of the money, and reject offers less than around 25 percent, often saying they wanted to punish the person for making an unfair offer.
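The game’s logic is simple enough to sketch in a few lines of Python. The only moving part in this illustrative sketch is the responder’s acceptance threshold, which is what separates the textbook self-interested player from the fairness-minded one the experiments actually find:

```python
def ultimatum_payoffs(pot, offer, threshold):
    """Payoffs (proposer, responder) for one round of the ultimatum game.

    The responder accepts any offer at or above `threshold`;
    a rejected offer leaves both players with nothing.
    """
    if offer >= threshold:
        return pot - offer, offer
    return 0, 0

pot = 50

# A purely self-interested responder accepts any positive amount,
# so the proposer can offer a single dollar and keep the rest.
print(ultimatum_payoffs(pot, 1, threshold=1))              # (49, 1)

# A fairness-minded responder, as experiments actually find, rejects
# offers below roughly 25 percent of the pot, so lowballing backfires.
print(ultimatum_payoffs(pot, 1, threshold=0.25 * pot))     # (0, 0)
print(ultimatum_payoffs(pot, 15, threshold=0.25 * pot))    # (35, 15)
```

The proposer’s typical real-world offer of 25 to 50 percent makes sense once the second kind of responder is a live possibility: a fair offer is cheap insurance against walking away with nothing.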

An important point that people often overlook about these experiments (and others like them) is that they’ve been performed very carefully, with participants remaining completely anonymous and playing only once. Everything is set up so no one can have any hope of building a good reputation or of getting any kind of payback in the future for their actions today.

So this really does seem to be pure altruism, and we do care about fairness, at least most of us.

That’s not to say, of course, that we’re not often self-interested, or that human kindness isn’t frequently strategic and aimed at currying favor in the future. The point is that it’s not always like that. People give to charity, tip waiters in countries they’ll never again visit, dive into rivers to save other people or even animals – or set aside $1 million to send poor kids to school – not because they hope to get something but, sometimes, out of the goodness of their hearts.

Social researchers have begun referring to this human tendency by the technical term “strong reciprocity”: a willingness to cooperate, and also to punish those who don’t cooperate, even when no gain is possible. And there’s an interesting theory as to why we’re like this.

In theoretical studies, economists and anthropologists have been exploring how self-interest and cooperation might have played out in our ancestral groups of hunter-gatherers. In interactions among individuals, it’s natural to suppose that purely self-interested people would tend to come out ahead, as they’d never get caught out helping others without getting help in return and would also be able to cheat any naïve altruists that come along.

But it is also natural to suppose that when neighboring groups compete with one another, the group with more altruists would have an advantage, as it would be better able to manage collective tasks – things like farming and hunting, providing for defense or caring for the sick – than a group of more selfish people.

So you can imagine a basic tension in the ancient world between individual interactions that favor self-interest and personal preservation, and group interactions that favor individual altruism. Detailed simulations suggest that if the group competition is strong enough, cooperators will persist because of their intense value to group cohesion.
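A bare-bones version of that tension can be written down directly. The sketch below is a toy Price-equation-style model with made-up parameter values, not a reconstruction of the actual published simulations: within-group selection erodes the altruist fraction at a rate set by the cost of altruism, while between-group competition replenishes it in proportion to how much groups differ from one another.

```python
def evolve(x0, cost, benefit, variance, generations=200):
    """Toy two-level-selection model of the global altruist fraction.

    Each generation, within-group selection removes altruists at a rate
    set by `cost`, while between-group competition adds them at a rate
    set by `benefit` times the variance in altruism across groups.
    All parameter values are illustrative, not fitted to anything.
    """
    x = x0
    for _ in range(generations):
        x += benefit * variance - cost * x * (1 - x)
        x = min(max(x, 0.0), 1.0)  # keep x a valid fraction
    return x

# Weak group competition: altruists dwindle to a token few percent.
print(round(evolve(0.5, cost=0.05, benefit=0.1, variance=0.02), 2))  # about 0.04

# Strong group competition: a substantial fraction of altruists persists.
print(round(evolve(0.5, cost=0.05, benefit=0.5, variance=0.02), 2))  # about 0.28
```

The qualitative lesson survives the toy treatment: whether cooperators hang on depends on the strength of competition between groups relative to the cost of altruism within them.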

But there’s slightly more to the story, too. Further work shows that groups really thrive if the altruists are of a special sort – not just people who are willing to cooperate with others, but who are also willing to punish those who they see failing to cooperate.

This work is only suggestive, but it raises the interesting idea that it’s a long history of often brutal competition among groups that has turned most of us into willing cooperators, or, more accurately, strong reciprocators. We’re not Homo economicus, as Herbert Gintis of the University of Massachusetts Amherst puts it, but Homo reciprocans – an organism biologically prone to cooperative actions, and for good historical reasons.

No doubt this is what many people probably thought all along, without the aid of any theory or computer simulations. It just goes to show how theorists can labor for years to re-discover the obvious. Then again, re-discovery often casts the familiar in a not-so-familiar light, and leads us to reconsider what we thought we already knew.

We’ve been so busy over the past half century glorifying the power of markets driven by self-interest that we’ve overlooked how many of our most important institutions depended not on self-interest but on something more akin to a cooperative public spirit. If an impulse toward cooperation rather than self-interest alone is the “natural” human condition, then we’ve been poor stewards of a powerful social resource for the collective good. The United States health care system, to take one example, has by design been set up around the profit motive, based on the belief that only this narrow motivator of individual action can be counted on to produce anything good. It’s perhaps no surprise that it is among the most expensive in the world, and far from the most effective.

In a press conference at the Cannes Film Festival, following a screening of his new film “Sicko,” Michael Moore criticized how financial interests play such a foundational role in health care in the United States. “It’s wrong and it’s immoral,” he said. “We have to take the profit motive out of health care. It’s as simple as that.”

But it’s not quite that simple. It’s not that profits shouldn’t play any role, because we are indeed motivated in part by self-interest. It’s just that we have other motivations, too, and helping others is one of those. We need to be just as open to the better parts of human nature as we are protective against the narrowly materialistic ones, whether we’re considering health care or anything else, including education.

You don’t need a new breed of experimental economists to tell you that. Just ask Braylon Edwards.

Thursday, May 17, 2007

The Prosecutor's Fallacy

Later this month – or it could be next month – a group of three judicial “wise men” in the Netherlands should finally settle the fate of a very unlucky woman named Lucia de Berk. A 45-year-old nurse, de Berk is currently in a Dutch prison, serving a life sentence for murder and attempted murder. The “wise men” – an advisory judicial committee known formally as the Posthumus II Commission – are reconsidering the legitimacy of her conviction four years ago.

Lucia is in prison, it seems, mostly because of human susceptibility to mathematical error – and our collective weakness for rushing to conclusions as a single-minded herd.

When a court first convicted her, the evidence seemed compelling. Following a tip-off from hospital administrators, investigators looked into a series of “suspicious” deaths or near deaths in hospital wards where de Berk had worked from 1999 to 2001, and they found that Lucia had been physically present when many of them took place. A statistical expert calculated that the odds were only 1 in 342 million that it could have been mere coincidence.

Open and shut case, right? Maybe not. A number of Dutch scientists now argue convincingly that the figure cited was incorrect and, worse, irrelevant to the proceedings, which were in addition plagued by numerous other problems.

For one, it seems that the investigators weren’t as careful as they might have been in collecting their data. When they went back, sifting through hospital records looking for suspicious cases, they classified at least some events as suspicious only after they realized that Lucia had been present. So the numbers that emerged were naturally stacked against her.

Mathematician Richard Gill of the University of Leiden, in the Netherlands, and others who have redone the statistical analysis to sort out this problem and others suggest that a more accurate number is something like 1 in 50, and that it could be as low as 1 in 5.

More seriously still – and here’s where the human mind really begins to struggle – the court, and pretty much everyone else involved in the case, appears to have committed a serious but subtle error of logic known as the prosecutor’s fallacy.

The big number reported to the court was an estimate (possibly greatly inflated) of the chance that so many suspicious events could have occurred with Lucia present if she were in fact innocent. Mathematically speaking, however, this isn’t at all the same as the chance that Lucia is innocent, given the evidence, which is what the court really wants to know.

To see why, suppose that police pick up a suspect and match his or her DNA to evidence collected at a crime scene. Suppose that the likelihood of a match, purely by chance, is only 1 in 10,000. Is this also the chance that they are innocent? It’s easy to make this leap, but you shouldn’t.

Here’s why. Suppose the city in which the person lives has 500,000 adult inhabitants. Given the 1 in 10,000 likelihood of a random DNA match, you’d expect that about 50 people in the city would have DNA that also matches the sample. So the suspect is only 1 of 50 people who could have been at the crime scene. Based on the DNA evidence only, the person is almost certainly innocent, not certainly guilty.
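The arithmetic in this example is a one-line application of Bayes’ theorem. The sketch below uses the same figures as above, and assumes exactly one true culprit in the city:

```python
def prob_guilty_given_match(population, match_prob, culprits=1):
    """Chance the matching suspect is the true culprit (Bayes' theorem).

    Assumes exactly `culprits` guilty person(s) hiding among `population`
    equally likely suspects, and that an innocent person's DNA matches
    the sample by pure chance with probability `match_prob`.
    """
    innocent_matches = (population - culprits) * match_prob  # expected false matches
    # The culprit matches for certain; the suspect is just one of all matchers.
    return culprits / (culprits + innocent_matches)

# The example above: a city of 500,000 adults, a 1-in-10,000 match rate.
p = prob_guilty_given_match(500_000, 1 / 10_000)
print(round(p, 3))  # 0.02 -- about a 2 percent chance of guilt, not 99.99 percent
```

The 1-in-10,000 figure answers “how likely is a match if innocent?”; the 2 percent figure answers “how likely is guilt given a match?” Conflating the two is precisely the prosecutor’s fallacy.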

This kind of error is so subtle that the untrained human mind doesn’t deal with it very well, and worse yet, usually cannot even recognize its own inability to do so. Unfortunately, this leads to serious consequences, as the case of Lucia de Berk illustrates. Worse yet, our strong illusion of certainty in such matters can also lead to the systematic suppression of doubt, another shortcoming of the de Berk case.

Indeed, de Berk’s defense team presented other numbers that should have created serious doubt in the mind of the court, but apparently didn’t. When de Berk worked on the hospital wards in question, from 1999 to 2001, six suspicious deaths occurred. In the same wards, in a similar period of time before de Berk started working there, there were actually seven suspicious deaths.

If de Berk were a serial killer, it certainly would be bizarre that her presence would lead to a decrease in the overall number of deaths.

Of course, the de Berk case is hardly an isolated example of statistical error in the courtroom. In a famous case in the United Kingdom a few years ago, Sally Clark was found guilty of killing her two infants, largely on the basis of testimony given by Roy Meadow, a physician who told the court that the chance that the two both could have died from Sudden Infant Death Syndrome (SIDS) was only 1 in 73 million.

Meadow arrived at this number by squaring the estimated probability of one such death, an elementary mistake: squaring assumes the two deaths were independent events. Because SIDS may well have genetic links, the chance that a mother who has already had one child die from SIDS will have a second one may be considerably higher.
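The effect of the independence assumption is easy to see numerically. The sketch below takes the commonly reported single-death estimate of about 1 in 8,543 as its input; the tenfold dependence factor is purely illustrative, not an actual epidemiological estimate:

```python
# The court's 1-in-73-million figure came from squaring the single-death
# probability, which is valid only if the two deaths are independent.
p_first = 1 / 8543  # commonly reported estimate for one SIDS death

independent = p_first * p_first
print(f"1 in {1 / independent:,.0f}")  # roughly 1 in 73 million

# If SIDS has genetic or environmental links, a second death is far more
# likely once a first has occurred. The tenfold factor is illustrative only.
p_second_given_first = 10 * p_first  # assumed dependence, for illustration
dependent = p_first * p_second_given_first
print(f"1 in {1 / dependent:,.0f}")  # roughly 1 in 7.3 million, ten times likelier
```

Any dependence between the deaths shrinks the headline number by the same factor, so the independence assumption is not a technicality; it is the whole result.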

Here, too, the prosecutor’s fallacy seems to have loomed large, as the likelihood of two SIDS deaths, whatever the number, is not the chance that the mother is guilty, though the court may have interpreted it as such.

Even our powerful intuitive belief that “common sense” is a reliable guide can be extremely dangerous. In Sally Clark’s first appeal, statistician Philip Dawid of University College London was called as an expert witness, but judges and lawyers ultimately decided not to take his advice, as the statistical matters in question were not, they decided, “rocket science.” The conviction was upheld on this appeal (although it was subsequently overturned).

Legal experts in the United States and the United Kingdom are taking some tentative steps to rectify this problem – by organizing further education in statistics for judges and lawyers, and by arranging for the use of special scientific panels in court. Still, it will remain difficult to counteract the timeless process of social amplification that can turn the opinions of a few, based on whatever reasoning, into the near certainty of the crowd.

In the wake of the impressive 1-in-342 million number, the Dutch press piled on de Berk, demonizing her as a cold, remorseless killer. They noted, as if it were somehow relevant, that she had suspiciously spent a number of years outside of the Netherlands, and had even worked for a time as a prostitute. Other “evidence” at the trial was an entry from de Berk’s diary, on the same date as one of the deaths, which said that she had “given in to her compulsion.” Elsewhere she wrote that she had “a very great secret,” which she insisted was reading Tarot cards, but the prosecution alleged, and many people believed, referred to her murdering patients.

What ensued was something akin to the Salem witch hunt. Throughout the trial, Lucia maintained her innocence. But the prosecution called an expert witness who testified that serial killers often refuse to confess. So her protestations became yet more evidence against her.

But now that the evidence has been called into question, social opinion, expressed most clearly in the press, has swung the other way. As Gill, the Leiden mathematician, said to me in an e-mail message, the media suddenly have begun pushing the view that maybe there’s been a miscarriage of justice.

“Suddenly we’re seeing real photos of Lucia de Berk as a normal person,” said Gill, “rather than as a kind of caricature of a modern witch. It’s a fascinating glimpse of group psychology, and a huge change seeded by a little bit of information at the right moment.”

In ordinary usage, “common sense” is taken to be something of value. Albert Einstein had a less charitable view. “Common sense,” he wrote, “is nothing more than a deposit of prejudices laid down by the mind before you reach age 18.”

Our ability as people to understand our habitual failings, both individually and socially, is a great part of what sets us apart from the rest of nature. We excel precisely insofar as we manage to use that ability. Sadly, in the legal setting at least, we still have lots of room for improvement.

In the case of Lucia de Berk, several Dutch scientists deserve enormous credit for their determined exploration of the way Lucia’s case was handled, and especially for exposing the flawed nature of the statistical arguments. Richard Gill has an extensive summary of the details of the case on the web. Ton Derksen, a Dutch philosopher of science, has written a book critical of the case. Both have submitted presentations to a Dutch committee of legal “wise men” which is now considering whether the case should be reopened.

Bias as usual? Or fair play?

David Stern, commissioner of the National Basketball Association, isn’t too happy with the recent revelation that N.B.A. referees appear to have a racial bias in the way they call fouls. He’s questioned the validity of the statistical analysis by Justin Wolfers, an assistant professor of business and public policy at the Wharton School, and Joseph Price, a Cornell graduate student in economics, which suggests that white and black referees call fouls preferentially against players of the opposite race. The N.B.A. insists that its own analysis (of different data) reveals no such bias, although other experts who’ve seen both analyses say that the Wolfers and Price study is more convincing.

But I wonder if Stern and the N.B.A. wouldn’t be better off with a different tactic. A finding of this kind may make for scandalous headlines, but it isn’t really all that surprising; it takes its place in an already sizable literature showing that racial bias often works more or less unconsciously, so that even its perpetrators often remain completely unaware of it. The fact that it took sophisticated mathematical analysis even to identify the bias, and that neither N.B.A. fans nor players appear to perceive it, suggests that maybe the N.B.A. isn’t doing such a bad job (though not that it shouldn’t try harder).

Psychologists have demonstrated that even children as young as three years old already attribute lasting and special significance to skin color, which they see as more fundamental than clothing styles, for example, or profession. They’re not surprised that someone might change clothes or the kind of work they do, but do not consider it credible that someone might change his or her race.

Most adults seem to have similarly automatic responses to race, with built-in biases. A few years ago, for example, one brain-imaging study of adults found that both white and black subjects, when presented with faces of the other race, showed increased activity in the amygdala, a brain region that typically responds to stimuli that have emotional significance. The subjects, meanwhile, reported feeling no particular emotion in connection with the different faces. Another study, this one of white subjects only, found that those whose amygdalas were most active also scored highest on a standard test for racial prejudice.

These studies were and are highly controversial (what’s not in this area?), and the scientists behind them quite rightly argue that they should be interpreted with care. After all, having these unconscious reactions doesn’t mean that people will act on them.

Still, this evidence strongly suggests that racial differences influence human behavior at a very primitive and unconscious level. Hence, it’s hardly surprising that N.B.A. officials, making split-second decisions on rapidly moving events under tremendous pressure, might succumb – weakly, the statistics suggest – to some kind of automatic and unconscious bias.

That’s not to say that bias of this kind is acceptable, or that it’s ineradicable and we just have to live with it. Far from it. The point is that understanding our natural biases – and their sometimes counterintuitive origins – may ultimately give us a better chance to take measures to mitigate their damaging consequences.

For many years, psychologists thought that people categorize others automatically along three lines – by age, sex and race. Numerous experiments, including those already mentioned, seem to show as much. But six years ago, anthropologists Robert Kurzban, John Tooby and Leda Cosmides of the University of California, Santa Barbara, suggested that this idea has deep problems, at least when it comes to race. That’s because through most of evolutionary history, people were not exposed to other races. So there was no way for any kind of hard-wired racial sensitivity to evolve.

Instead, they argue, the experimental finding that people do categorize by race, automatically and rapidly, arises as a kind of accident. What our ancestors did need to be able to do – and there were potentially deadly consequences if they got it wrong – was to identify the groups or coalitions to which people belonged. Is the stranger walking into camp part of my tribe, or an enemy?

To make such decisions, the argument goes, people naturally latched onto whatever clues were available, including style of clothing or skin color. This would be little more than a nice theory if the three authors hadn’t also backed it up with some further experiments.

To test the idea, they had volunteers look at photos of blacks and whites, and manipulated cues that gave volunteers information about the people’s group affiliations, so that the apparent coalitions among them ran not along racial divisions, but across them. In this case, they found, the power of racial categorization quickly decreased, as it was replaced by awareness of the true coalitional boundaries.

The conclusion was that racial awareness isn’t innate; group awareness is. Unfortunately, in today’s world, perceived coalitions often run along racial or ethnic lines, and so coalitional perception takes on the form of racial or ethnic perception.

A paradoxical element of this view is that racial discrimination emerges more or less from the same instincts that support natural group identification, which almost certainly played a huge role in helping our ancestors to form strong and cohesive groups. You’ve got to wonder, with these influences being so automatic, whether the N.B.A. is really doing worse than anyone else.

Perhaps bias in the N.B.A. has been discovered only because sports naturally generate the kind of data that is easily susceptible to mathematical analysis. Social researchers starved for good data naturally look for the low-hanging fruit first. The same influences may be far more pervasive elsewhere, yet very difficult to demonstrate mathematically.

The really difficult question, for Stern and the rest of us, is what any organization or society can do about racial bias, given that it is so deeply rooted as to influence the behavior even of people who profess to be and truly believe they are fair-minded.

Friday, May 11, 2007

The roots of ethnic violence

One week ago, the International Criminal Court in The Hague issued arrest warrants on charges of war crimes and crimes against humanity for Ahmad Muhammad Harun, Sudan’s former interior minister who oversaw the Darfur Security Desk, and Ali Muhammad Ali Abd-al-Rahman, a leader of the Janjaweed militia. Mr. Harun is now Sudan’s minister of humanitarian affairs. The two men seem to be rather small fry compared to Omar Hassan al-Bashir, the Sudanese president, but it’s a step in the right direction. Too little, too late, of course, for the hundreds of thousands who’ve been tortured, raped and murdered in the Darfur region over the past four years.

The question, as always, is why African and Western governments have been so painfully slow in bringing pressure against the Sudanese government, which might have stopped the killing years ago. Unfortunately, the pattern is all too familiar: as Samantha Power documented in her powerful book “A Problem From Hell,” the history of genocide over the past century is one of governments, including the United States, responding almost habitually with denial and delay, with excuses and inaction.

Why? Governments, no doubt, have their own secret and not always admirable reasons for staying out. Most people, I suppose, find it hard even to imagine these things happening, as they lie so far outside our personal experience.

But the failure also stems, I suspect, from a deep misunderstanding of the origins of ethnically-targeted violence – and of the way individuals in the right positions can exploit the power of social patterns for their own selfish ends. From the Turkish slaughter of Armenians in the early 20th century through Rwanda and the former Yugoslavia, genocide has erupted almost never as a spontaneous orgy of mass killing, but almost always as the result of political calculation and orchestration.

Before the massacre in Rwanda, for example, radio stations and newspapers owned by a handful of government officials began referring to the Tutsi as “subhuman.” The government funded and organized radical Hutu groups that amassed weapons and trained people as killers.

There was no spontaneous social uprising. The fate of one million Tutsis had been planned and prepared by April 6, 1994, when a plane carrying the Rwandan president and the Hutu president of Burundi was shot down in Kigali.

Ten years ago, a group of noted experts in sociology and psychology met in Northern Ireland to discuss the roots of genocide, and they concluded that most of the ethnopolitically motivated killings in the 20th century could be traced not to “spontaneous popular actions,” but to “dominant state elites trying to maintain their nations’ unity and their own monopoly on power.”

But this understanding is complete only if we see that leaders aren’t able to do anything they like. They wield power by stirring up social patterns and directing them for their own ends. Unfortunately, we humans are in many ways almost pre-programmed to make this possible.

Among natural ethnic groups, of course, ethnocentrism is a universal phenomenon, with people everywhere convinced in no uncertain terms that their culture is superior. This isn’t just a characteristic of the uneducated; ethnocentrism presumably finds its roots in the structure of the human mind, and in the evolutionary experience of our hunter-gatherer ancestors who lived in small groups.

More surprising yet, a readiness for group-level prejudice may also be linked to the collective mechanisms that support human cooperation. On the basis of computer simulations in recent years, political scientists Ross Hammond of the Brookings Institution and Robert Axelrod of the University of Michigan suggest that in especially primitive social conditions – when people have limited opportunities to use reputation or character as a guide to someone’s trustworthiness – raw prejudice, based on crude ethnic markers, can actually be beneficial.

Roughly speaking, Hammond and Axelrod found that ethnic prejudice – again, only in these primitive conditions – helps to organize people into homogeneous ethnic groups within which interaction is easy and mostly cooperative, while minimizing the more difficult and often costly interactions between groups. Without prejudiced behavior, you would not get as much cooperation.
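To make the logic concrete, here is a minimal payoff calculation in Python – my own toy illustration, not Hammond and Axelrod’s actual spatial evolutionary model. The payoff numbers and strategy names are assumptions of mine: cooperation delivers a benefit to the partner at a cost to the cooperator, and with no reputation available, an agent that cooperates only with its own tag secures in-group cooperation while avoiding exploitation by out-group defectors.

```python
# Toy sketch (not Hammond and Axelrod's model): in a one-shot world with
# no reputation, tag-based "prejudice" earns in-group cooperation while
# avoiding exploitation across group lines.

BENEFIT, COST = 3.0, 1.0  # illustrative payoffs: b to the partner, c to the cooperator

def payoff(my_coop, partner_coop):
    """My payoff from a single interaction."""
    return (BENEFIT if partner_coop else 0.0) - (COST if my_coop else 0.0)

def decide(strategy, my_tag, partner_tag):
    if strategy == "ethnocentric":   # cooperate only with my own tag
        return my_tag == partner_tag
    if strategy == "altruist":       # cooperate with everyone
        return True
    return False                     # "selfish": never cooperate

def average_payoff(strategy, population, my_tag=0):
    """Average payoff of one agent paired against every member of the population."""
    total = 0.0
    for (s, tag) in population:
        mine = decide(strategy, my_tag, tag)
        theirs = decide(s, tag, my_tag)
        total += payoff(mine, theirs)
    return total / len(population)

# A mixed population: half share our tag and reciprocate ethnocentrically,
# half carry the other tag and simply defect.
population = [("ethnocentric", 0)] * 50 + [("selfish", 1)] * 50

print(average_payoff("ethnocentric", population))  # gains in-group, loses nothing out-group
print(average_payoff("altruist", population))      # exploited by the out-group defectors
```

In this crude setting the tag-based strategy outscores indiscriminate cooperation, which is the flavor of the Hammond–Axelrod result, though their finding depends on a richer spatial model with reproduction and mutation.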

The conclusion is that ethnocentrism, while it may be ugly, also may be effective. The researchers suggest that it may be no coincidence that ethnic divisions and tensions tend to become enhanced and more influential in conditions of economic collapse or when war tears apart the social fabric.

Sadly, it is our innate preparedness for ethnocentric behavior that politicians such as Slobodan Milosevic – or President Bashir today – use so effectively. In 2003, when various African tribes began rebelling in Darfur, Bashir responded by arming irregular militias and directing them to attack black civilians indiscriminately. Many of the militiamen were of Arab descent, driven by powerful ethnic prejudice.

Historians once argued over whether individuals or social forces control history. The truth is that both are important. An individual can “pluck the strings” of a social pattern much as a musician would those of a guitar, and wield power far beyond his own personal being.

This abstract understanding does nothing to lessen the unspeakable suffering of the millions of people with names and faces and dreams whose lives have been and are being ruined or cut short. But it should alert us that genocide is not the direct consequence of unstoppable age-old hatreds, but that of people in power who use their influence to stoke hatred for strategic purposes.

It is only these people who need to be stopped.

How order creates itself

An anonymous commenter in the New York Times, responding to my previous columns, suggested that my title “Our Lives as Atoms” (this is the title under which these posts appear in the NYT) is “more than a little puzzling,” and wondered “Where will all this lead us?” I’ve written about the amplified polarization of opinion in the political blogs, and about the abuse at Abu Ghraib prison, which had a disturbingly eerie resemblance to famous experiments at Stanford University 36 years ago. What does any of this have to do with atoms? Fair question. I’d like to start my answer by telling you about a strange phenomenon in Spitsbergen.

Spitsbergen is a Norwegian island in the Svalbard archipelago. It has spectacular mountains, abundant glaciers and desolate tundra, where stones are littered over a flat and mostly featureless terrain. In places on this tundra, the stones are arranged in a remarkable way; they lie not in a chaotic, haphazard jumble but in an ordered array of hauntingly beautiful, nearly perfect circular piles. You really have to look at a photo to believe it.

Where do these stone circles come from? Some might suspect the activity of intelligent agents, the local people, perhaps. But in fact, these circular piles arise all on their own by a natural process, with no human intervention. As geophysicists Mark Kessler and Brad Werner first explained a few years ago, forces associated with freezing and thawing push the stones into stretched-out piles, and then curve the long piles around (at least sometimes) to form complete rings.

Our human intuition is ill-prepared to understand such “spontaneous order” in the physical world. Suppose you put some sand in a shallow box with a lid and shake it up and down. Will anything interesting happen? Most people think not. But when physicist Harry Swinney and colleagues at the University of Texas at Austin did this experiment a few years ago – O.K., they used millions of tiny ball bearings rather than sand – they found something very surprising. When the frequency of shaking speeds up and exceeds a certain limit, beautiful wave-like patterns form in the box.

Such spontaneous order is caused by feedback. A little pattern, even if it arises quite by accident, sets up forces that reinforce the pattern. A little clumping of stones on the tundra triggers physical reactions that lead to more clumping, and eventually to stone piles.
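Here is a toy sketch of that feedback in Python – entirely my own illustration, not the Kessler–Werner model. Stones sit on a ring of cells and preferentially hop toward neighboring cells that are already crowded, so a perfectly uniform scattering spontaneously develops piles:

```python
# Feedback-driven clumping, in miniature: a stone is more likely to hop
# into an already-crowded neighboring cell, so small accidental clumps
# attract more stones and piles emerge from a uniform start.

import random

random.seed(0)
N_CELLS = 50
cells = [4] * N_CELLS  # start perfectly uniform: four stones in every cell

def step(cells):
    i = random.randrange(len(cells))
    if cells[i] == 0:
        return
    left, right = (i - 1) % len(cells), (i + 1) % len(cells)
    # The feedback: the chance of moving toward a neighbor grows with its pile size.
    weights = [cells[left] + 1, cells[right] + 1]
    dest = random.choices([left, right], weights=weights)[0]
    cells[i] -= 1
    cells[dest] += 1

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

start_var = variance(cells)  # 0.0 for the uniform start
for _ in range(20000):
    step(cells)
print(variance(cells))  # clumping shows up as a growing spread in pile sizes
```

The point of the sketch is only the mechanism: a tiny accidental imbalance biases the next move, which deepens the imbalance, exactly the kind of self-reinforcement at work on the tundra.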

This still says nothing about what our lives have to do with atoms. But bear with me a tiny bit further.

The physical world contains lots of kinds of stuff – liquids and solids, metals that conduct electricity and rubber that doesn’t, semiconductors and superconductors, liquid crystals and magnets. These things are made of different kinds of atoms, but that’s not the only reason why they’re different from one another. One of the most important lessons of modern physics is that the way things are organized sometimes matters more than what they are made of. The same carbon atoms that make soft, dull graphite also make sparkling and super-hard diamond. Organization matters.

Which returns me to the mysterious title, “Our Lives as Atoms.” It may be a crude analogy, but people are akin to “atoms” in that we are the elementary building blocks of the social world. Although we tend to think of ourselves as individuals making up our own minds, we’re obviously influenced by what others around us do. Social patterns routinely emerge that have little to do with the character of individual people.

In June 2000, on the afternoon of the opening of the London Millennium Bridge, the first pedestrian bridge built across the Thames River in central London for more than a century, a policeman noticed the bridge begin to sway from side to side. Authorities quickly herded the people off and shut the bridge. What had caused the problem? The best explanation is that people’s feet, simply by walking, had set up a weak vibration in the bridge, a gentle swaying, which created feedback.

To keep their balance, people found it easier to adjust their gait and walk in time with the sway. But this amplified the motion. The more the bridge swayed, the more people adjusted their gait, making the bridge sway even more, until it was swinging several inches to either side. It was spontaneous order indeed, and of a rather dangerous kind.
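The instability can be caricatured in a few lines of code. What follows is a linear toy model with made-up numbers, not an engineering analysis of the bridge: each step, damping drains a fixed fraction of the sway, while pedestrians walking in time with it feed some back in proportion to crowd size. Below a critical crowd the sway dies away; above it, the feedback wins and the sway grows.

```python
# Toy linear-feedback model of the swaying bridge (illustrative numbers only).

DAMPING = 0.05     # fraction of sway amplitude lost per step
COUPLING = 0.0004  # amplitude fed back per synchronized pedestrian per step

def final_amplitude(n_pedestrians, steps=200, a0=0.001):
    """Iterate the sway amplitude from a tiny initial disturbance a0."""
    a = a0
    for _ in range(steps):
        # net growth rate: pedestrian feedback minus damping
        a += (COUPLING * n_pedestrians - DAMPING) * a
    return a

# critical crowd size in this toy model: COUPLING * n = DAMPING, i.e. n = 125
print(final_amplitude(50))   # below threshold: the disturbance decays
print(final_amplitude(200))  # above threshold: the disturbance grows
```

The qualitative lesson carries over: nothing about any individual walker changes at the threshold; only the balance between feedback and damping does.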

This is a nice metaphor for how individual actions work together to create larger social forces. We don’t ordinarily think this way, I suspect, because our inner voice usually explains things in terms of narratives that refer to people’s intentions, thought processes and so on.

But it would be a very strange world indeed if the basic logic of spontaneous order didn’t affect us – possibly at many levels – in the way we think, the opinions we have, the clothes we wear, our political beliefs, what we do for a living and so on. In my first two columns, I looked at how individual psychology feeds into the mechanisms of the Web to create and amplify polarized opinions, and how the situation at Abu Ghraib prison, like situations in thousands of other prisons worldwide, set up the preconditions that made abuse more likely, perhaps even predictable.

I hope that clears up the connection to atoms, for Anonymous or anyone else. Maybe this column should have been the first one, but then, that’s the way the human mind works – trial and error. And ideally, correction.

Thursday, May 3, 2007

The illusion of a nation divided

We seem to be a rather polarized country. According to views often expressed in the media, especially online, Republicans revel in the idea of torture and detest our Constitution, while Democrats want to bring the troops home from Iraq only to accomplish the dastardly double-trick of surrendering our country to the terrorists and kicking off a genocide in the Middle East.

It’s odd then that recent polls actually show 60 percent of Americans wanting to see the troops come home from Iraq either immediately or within a year. Another poll has 90 percent of Democrats, 80 percent of independents, and 60 percent of Republicans agreeing that global warming is a serious problem and that we as a nation should be a global leader in doing something about it. Studies show that when it comes to issues ranging from health care to the death penalty, from immigration to Social Security, people in the so-called red and blue states hold remarkably similar views.

Still, the illusion of a nation divided persists, and one reason that it does may be oddly mechanical. It’s quite possible that the emergence of visibly polarized views in the media, especially on the Internet, may be an almost automatic result of the relatively simple rules by which opinions and attitudes propagate through human heads.

Before I explain, I’d like to bring up what might seem to be a separate issue, racial segregation.

In the early 1970s, it was taken for granted among American academics that persistent racial segregation was due mostly to racism. Studies had revealed widespread racial bias in hiring, promotion and pay, and real estate practices that worked to keep blacks out of white neighborhoods. But Thomas Schelling, an economist then at Harvard, suggested that another factor might also be at work.

Schelling supposed that many people, even those who are perfectly happy to live in an integrated neighborhood, might also prefer not to live in one where they were part of an extreme minority. You wouldn’t think this simple preference could have much influence, but it can. Moving coins representing people around on a grid of squares representing houses, Schelling showed that the simple preference not to live in an extreme minority should generally lead people to move about in such a way that a community ends up divided into distinct, racially homogeneous enclaves.

No one would come to this counterintuitive insight by sitting in an armchair, philosopher-like, and thinking about it. Schelling found it by way of an experiment, in his case with coins and paper, though today researchers can demonstrate the same effect using computers. The important conclusion isn’t that racism is unimportant; there is no question that it is. The point is that segregation by itself doesn’t imply racism; segregation might well arise quite automatically, the races separating like oil and water. Racism is real, but so is the automatic segregation effect, and we should be aware of both.
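For the curious, here is a minimal Schelling-style simulation in Python. It is a simplification of my own, not Schelling’s original setup: agents of two types sit on a one-dimensional ring rather than his grid, and an agent is discontented only in the extreme case where every occupied neighbor is of the other type. Discontented agents relocate to an empty cell where they would be content, and enclaves emerge anyway:

```python
# A one-dimensional, simplified Schelling model: mild preferences
# (only "don't be a total minority among my immediate neighbors")
# still drive the two types apart into homogeneous enclaves.

import random

random.seed(1)
SIZE = 60
# 0 = empty cell; 1 and 2 = the two types of agent, scattered at random
cells = [1] * 25 + [2] * 25 + [0] * 10
random.shuffle(cells)

def neighbors(i):
    return [cells[(i - 1) % SIZE], cells[(i + 1) % SIZE]]

def unhappy(i):
    me = cells[i]
    others = [n for n in neighbors(i) if n != 0]
    # discontented only if every occupied neighbor is the other type
    return bool(others) and all(n != me for n in others)

def similarity():
    """Average fraction of an agent's occupied neighbors that share its type."""
    scores = []
    for i, me in enumerate(cells):
        if me == 0:
            continue
        others = [n for n in neighbors(i) if n != 0]
        if others:
            scores.append(sum(n == me for n in others) / len(others))
    return sum(scores) / len(scores)

def move_if_unhappy(i):
    if cells[i] == 0 or not unhappy(i):
        return
    empties = [j for j, c in enumerate(cells) if c == 0]
    random.shuffle(empties)
    for j in empties:
        cells[j], cells[i] = cells[i], 0   # try the move...
        if not unhappy(j):
            return                         # ...keep it if content there
        cells[i], cells[j] = cells[j], 0   # otherwise move back

before = similarity()
for _ in range(2000):
    move_if_unhappy(random.randrange(SIZE))
print(before, similarity())  # like-type neighboring rises as enclaves form
```

No agent in this sketch objects to integration; each objects only to being completely surrounded by the other type, and segregation emerges regardless.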

What does this have to do with our polarized political world? Just after the 2004 election, a pair of researchers undertook a study of how blogs link to one another. Imagine all the blogs on the Web as points on a page, colored red or blue depending on their political slant. Now draw a line between any two if one of them links to the other. Doing this, Lada Adamic and Natalie Glance found that the red and blue bloggers belonged to strongly separated communities, within which there were many links between like-minded sites. In contrast, there were very few links connecting the red community to the blue. (The original paper has a nice diagram of the two communities, showing the links within and between them.)

This intellectual segregation of the like-minded into separate enclaves persists today. And it is almost certain that it can be explained by the operation of a process like Schelling’s.

People experience real psychological discomfort – psychologists call it “cognitive dissonance” – when confronted with views that contradict their own. They can avoid the discomfort by ignoring contradictory views, and this alone brings like-minded bloggers together.

Humans share another psychological habit too – a strong tendency to adopt, even if unconsciously, the attitudes of those with whom they interact. We even copy other people’s behavior patterns.

In experiments at the University of Amsterdam, psychologist Ap Dijksterhuis had two separate groups of volunteers talk with some actors who behaved as either professors (supposedly intelligent) or soccer hooligans (supposedly not so smart). Afterwards, when the subjects were asked to answer a series of general knowledge questions, the people who had been “primed” by interaction with the “professors” did significantly better than those primed by the “hooligans.”

If just being around people who act “smart” or “stupid” can make us act similarly, goodness knows what automatic influences percolate through the more extreme regions of the blogosphere.

Our largely unconscious psychological mechanisms appear to support a kind of mechanical feedback that cannot easily lead to anything but a pronounced segregation into polarized enclaves, with attitudes and opinions amplifying themselves and reverberating within the confines of two distinct halls of mirrors. None of which, obviously, is conducive to healthy public discourse, nor supportive of any kind of reasoned and balanced consideration of issues.
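One standard way to sketch this kind of feedback is the “bounded confidence” model of Hegselmann and Krause – a textbook model, to be clear, not anything drawn from the studies above. Each agent simply ignores opinions far from its own (dissonance avoidance) and drifts toward the average of the rest (imitation), and a perfectly smooth spectrum of opinion splits into separated camps:

```python
# Hegselmann-Krause bounded-confidence dynamics: agents average only over
# opinions within EPSILON of their own. Dissonance-avoidance plus imitation
# is enough to fragment a smooth opinion spectrum into distinct clusters.

EPSILON = 0.1  # an agent only listens to opinions within this distance

def step(opinions):
    new = []
    for x in opinions:
        near = [y for y in opinions if abs(y - x) <= EPSILON]
        new.append(sum(near) / len(near))
    return new

# a perfectly smooth spectrum of 50 opinions between 0 and 1
opinions = [i / 49 for i in range(50)]
for _ in range(100):
    opinions = step(opinions)

def clusters(opinions, gap=0.01):
    """Count groups of opinions separated by more than `gap`."""
    xs = sorted(opinions)
    count = 1
    for a, b in zip(xs, xs[1:]):
        if b - a > gap:
            count += 1
    return count

print(clusters(opinions))  # several distinct camps, not one consensus
```

No agent in the model wants polarization; each merely tunes out uncomfortable distant views, and the enclaves assemble themselves.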

The trouble is that these forces operate outside of our view. Many people, not by choice but more or less automatically, filter reality in an emotional way that preserves and supports the groups to which they feel linked.

The good news is, we’re not all bloggers – yet. And the American public still shows a more balanced range of opinion, much of it squarely in the middle, than one would guess from looking at the polarized blogosphere.

How people turn monstrous

It is three years and a few days since CBS News published the first photos documenting the systematic abuse, torture and humiliation of Iraqi prisoners at Abu Ghraib prison. The Bush administration and the American military have worked hard to firmly establish the “few bad apples” explanation of what happened. Eight low-ranking soldiers were convicted, and Staff Sergeant Ivan Frederick II, who was found guilty of assault, conspiracy, dereliction of duty and maltreatment of detainees, is now halfway through his eight-year prison sentence.

But there are very good reasons to think that Frederick and the others, however despicable their actions, only did what many of us would have done if placed in the same situation, which puts their guilt in a questionable light. Can someone be guilty just for acting like most ordinary human beings?

In a famous experiment back in the 1970s, Philip Zimbardo and other psychologists at Stanford University put college students into a prison-like setting in the basement of the psychology department. Some of the students played prisoners and others guards, with uniforms, numbers, reflecting sunglasses and so on. The psychologists’ aim was to strip away the students’ individuality and see what the situation might produce on its own.

What happened was truly disconcerting — the guards grew increasingly abusive, and within 36 hours the first prisoner had an emotional breakdown, crying and screaming. The researchers had to stop the experiment after six days. Even normal kids who were professed pacifists were acting sadistically, taking pleasure in inflicting cruel punishments on people they knew to be completely blameless.

These were ordinary American college kids. They weren’t monsters, but began acting monstrously because of the situation they were in. What happened was more about social pattern, and its influence, than about the character of individuals.

Zimbardo, now an emeritus professor at Stanford, has argued in a recent book, “The Lucifer Effect,” that what happened in these experiments is also what happened at Abu Ghraib. As he points out, in lots of the photos the soldiers weren’t wearing their uniforms; they were anonymous guards who referred to the prisoners with dehumanizing labels such as “detainees” or “terrorists.” There was confusion about responsibility and little supervision of the prison at night.

The more the soldiers mistreated the prisoners, the more they saw them as less than human and even more worthy of that abuse. In both the experiments and at Abu Ghraib, most of the abuse took place on the night shift. In both cases, guards stripped prisoners naked to humiliate them and put bags over their heads. In both cases, the abuse involved the forced simulation of sexual behavior among the prisoners.

Frederick hooked up wires to hooded detainees, made them stand on boxes and told them they’d be electrocuted if they fell off. He stomped on prisoners’ hands and feet. He and others lined up prisoners against the wall, bags on their heads, and forced them to masturbate. His actions were indeed monstrous.

But when Zimbardo, as an expert witness, interviewed Frederick during his court-martial, these were his impressions:

He seemed very much to be a normal young American. His psych assessments revealed no sign of any pathology, no sadistic tendencies, and all his psych assessment scores are in the normal range, as is his intelligence. He had been a prison guard at a small minimal security prison where he performed for many years without incident. … there is nothing in his background, temperament, or disposition that could have been a facilitating factor for the abuses he committed at the Abu Ghraib Prison.

If someone chooses to commit an illegal act, freely, of their own will, then they are plainly guilty. Conversely, the same act performed by someone acting without free will, compromised by mental illness, perhaps, or the coercion of others, draws no blame. Far less clear is the proper moral attitude toward people who do illegal things in situations where the social context exerts powerful, though perhaps not completely irresistible, forces.

Can a person be guilty of a crime if almost everyone, except for a few heroic types, would have done the same thing? This is a question for legal theorists, and one likely to arise ever more frequently as modern psychology reveals just how much of our activity is determined not consciously, through free choice, but by forces in the social environment.

But the more immediate question is why those who set up the conditions that led to Abu Ghraib, or at least made it likely, haven’t also been held responsible. When Frederick arrived at Abu Ghraib, abusive practices, authorized from above, were already commonplace. Prisoners were being stripped, kept hooded and deprived of sleep, put in painful positions and threatened with dogs. On his first day there, Frederick recalled, he saw detainees “naked, handcuffed to their door, some wearing female underclothes.”

The conditions cited by Zimbardo, the situational recipe for moral disaster, were already in place.

The conclusion isn’t that Frederick and the others didn’t do anything wrong, or that they somehow had an excuse for their actions. They could and should have acted better, and Frederick has admitted his own guilt. “I was wrong about what I did,” he told the military judge, “and I shouldn’t have done it. I knew it was wrong at the time because I knew it was a form of abuse.”

But you and I cannot look at Frederick and the other guards as moral monsters, because none of us can know that we’d have acted differently. The evidence suggests that most of us wouldn’t have. The coercion of the social context was too powerful.

The second conclusion is that those really responsible for the abuse, on a deeper and more systematic level, still should be brought to justice. They’re in the upper tiers of the military chain of command and its civilian leadership; they’re in the White House.

Today, Frederick will wake up in prison, have his breakfast, take some exercise and face the daily monotony of prison life, something he can expect for the next 1300 days or so. He can be justifiably angry that those responsible for putting him in that setting at Abu Ghraib, where almost anyone would have done the same thing, are today walking around free.