Tuesday, July 31, 2007

The compartmented mind

I'm always surprised to find scientists whose work I respect having religious beliefs. It seems, from an essay by Carl Zimmer in today's New York Times, that mathematical biologist Martin Nowak of Harvard University may be one such scientist believer. As Zimmer quotes Nowak,

Like mathematics, many theological statements do not need scientific confirmation. Once you have the proof of Fermat’s Last Theorem, it’s not like we have to wait for the scientists to tell us if it’s right.


Nowak has done lots of great work on the logic of social behavior and cooperation in biology, not only among humans, but among many other organisms. It's brilliant work, and I've written about this field in the past. If you think we humans are really special because of our ability to cooperate, think again; single-celled bacteria manage many of the same tricks (which I think really brings home the point that some social phenomena have much more to do with the logic of collective organization than with the "specialness" of individual human beings).

Still, I find Nowak's comment on theology quite striking, and really hard to understand. (And obviously, I don't know the complete context in which he said it.) Does he believe that principles of religious belief can be established with the same kind of mathematical certainty as Fermat's last theorem, or any other mathematical theorem? His argument seems to have a devious logic.

Mathematical certainty is of a different order from anything requiring empirical confirmation; I believe in Pythagoras' Theorem in plane geometry because of mathematical proof, not empirical facts (that it works in practice merely shows how well plane geometry fits our world of ordinary experience). Quite reasonably, therefore, Nowak argues that scientists don't go demanding empirical evidence for mathematical assertions, the truth of which has been established by logic (if one already accepts the beginning assumptions). But then comes the punchline -- so why should scientists demand evidence for religious assertions?

Well, the problem, obviously, is that religious beliefs aren't established by any appeal to logic, but are taken on faith. They aren't like mathematics, and so the comparison is just irrelevant. It seems as if Nowak is implying that some theological statements can be proven by pure logic, but which ones? I'd love to see some examples.

All this brings to mind a fascinating novel I just read, A Certain Ambiguity, by Gaurav Suri and Hartosh Singh Bal (I have a short review of it forthcoming in New Scientist). In the novel, an Indian mathematician, Vijay Sahni, gets arrested in New Jersey under archaic laws against "blasphemy" when he publicly doubts the truths of Christianity. He must defend himself before a Judge who is himself a committed Christian. Sahni tries to argue to the judge that religious certainty never approaches the same level as mathematical certainty, and illustrates the nature of mathematical reasoning by showing how geometric truths (such as the Pythagorean theorem) can be established from a handful of "self evident" axioms and pure logic.

To this demonstration, the Judge quickly responds that the same is true in Christianity:

I start with the "simple fact" that everything in the Universe must be created by something; it cannot come into being out of nothing... Hence, the first something must have been created by a being that was always there, a being that could create something from nothing, and a being that lived before time started. Humans have just given that being a name; we call Him God.


To which Sahni replies:

The problem is I do not believe your underlying "simple fact" that everything in the universe must have been created by something... Everywhere I look, life is full of cycles... Or I could equally well claim that the "first thing" always existed... Your assumption that everything must have a cause of creation applies to the world of sense experience. In this realm I will agree with you that your "simple fact" of causality is self-evident. But you have extended the "simple fact" beyond the world of sense experience to something that is supposed to transcend it. You have applied an experiential notion beyond all possible experience, as well as beyond the limits for which there are any guarantees that our sensory perceptions are reliable. So, yes, Judge, I disbelieve your underlying simple fact, and hence I disbelieve your conclusion that God exists.


Curiously, the Zimmer article also quotes another well-known scientist believer, biologist Francis Collins, director of the National Human Genome Research Institute. But Collins and Nowak seem to disagree with one another. While Collins claims that "Selfless altruism presents a major challenge for the evolutionist," a point I've argued against in The Social Atom, and in this blog, and elsewhere, Nowak seems to think there's no big problem. “The current theory," he says, "can certainly explain a population where some people act extremely altruistically."

But so what? People disagree, and people are all over the board on this religion versus science argument. What it comes down to, it seems to me, is a point made very well by Sam Harris in The End of Faith -- that those motivated by religion and faith seem to have marked off a special corner of life in which the ordinary laws of thinking and evidence just aren't supposed to apply. For those persuaded by religious faith, the world seems to fall into two disjoint parts -- one in which the search for evidence and application of reason is the only known method for gaining sure knowledge, and the other in which the need for evidence is willfully suspended:

Tell a devout Christian that his wife is cheating on him, or that frozen yogurt can make a man invisible, and he is likely to require as much evidence as anyone else, and to be persuaded only to the extent that you give it. Tell him that the book he keeps by his bed was written by an invisible deity who will punish him with fire for eternity if he fails to accept its every incredible claim, and he seems to require no evidence whatsoever.

Monday, July 30, 2007

bin Loggin'

No real time to post today as I'm past deadline and trying to fathom and write about a long paper on the "hybrid analysis" of genetic regulatory networks. Don't worry -- I won't say another word on the topic.

But...for amusement, have a look at George Mason economist Bryan Caplan's grim reckoning of the political process and why we idiot citizens get what we deserve, as summarized in today's New York Times opinion piece by Nicholas Kristof. The gist:

This book, by Bryan Caplan, an economist at George Mason University, does a remarkably thorough job of insulting the American voter. The cover portrays the electorate as a flock of sheep.

“Democracies frequently adopt and maintain policies harmful for most people,” Professor Caplan notes. There are various explanations for this — the power of special interests, public ignorance of details, and so on. But Mr. Caplan argues that those accounts fall short.

“This book develops an alternative story of how democracy fails,” he writes. “The central idea is that voters are worse than ignorant; they are, in a word, irrational — and vote accordingly.”


I haven't yet read Caplan's book, The Myth of the Rational Voter: Why Democracies Choose Bad Policies, but I'm going to order a copy as he seems to explore fertile territory -- how our biological "thinking instincts" tend to lead us into bad decisions, whether because we weigh up risks incorrectly, or vote for people because they seem strong and "presidential". In its short review of the book, Publishers Weekly says the book explores how social science's "misguided insistence that every model be a 'story without fools'" has led it to a very poor understanding of human systems. I can certainly agree with that.

Maybe nothing brings the point home more clearly, as Tim Grieve at Salon.com notes, than that 41 percent of Americans still think Saddam Hussein helped pull off 9/11, or that 11 percent actually think we've caught bin Laden (or "bin Loggin'", as a man in Pennsylvania apparently pronounced the name to Grieve).

Friday, July 27, 2007

Politicians and quantum "flip-flops"

One update (below)

As the campaign for the U.S. Presidential election gains momentum, we're once again subjected to the irritating spectacle of candidates responding to simple questions with paragraphs of vacuous waffle, and mouthing totally contradictory opinions to different groups. It's tempting, and maybe partially correct, to suspect they're all just slippery and mealy-mouthed weasels. But here's another idea: maybe the way they act has its origins in a deeper problem inherent in the mathematics of democracy and public opinion. There's good reason to believe that a candidate can appeal to more people by holding several contradictory views all at once.

The French philosopher, the Marquis de Condorcet, once proved a truly surprising theorem about public opinion. As individuals, people tend to be logically consistent in their opinions or preferences. If I prefer Obama over Clinton, and Clinton over Romney, then I'll also prefer Obama over Romney. If A outranks B, and B outranks C, then A should outrank C. Most of our thinking conforms to this basic rule of logic, at least most of the time.

But just because individual preferences follow such logic, this doesn't imply that groups do as well. Within a population of people, Condorcet proved, it is entirely possible for a majority to prefer A over B, a majority to prefer B over C, and a majority also to prefer C over A -- making a cycle of collective preference, so that it's impossible to say which of A, B or C the people really prefer. (For example, take three voters who, respectively, rank three policy alternatives in the following orders: A > B > C, C > A > B, and B > C > A. It is easy to see that two out of three will prefer A over B, B over C, and yet also C over A.)
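Just to make that concrete, here's a quick Python check of the three-voter example (my own throwaway code, nothing more):

```python
# The three ballots from the example, each listed best-first.
ballots = [["A", "B", "C"],
           ["C", "A", "B"],
           ["B", "C", "A"]]

def majority_prefers(x, y):
    """True if a majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}:", majority_prefers(x, y))
# All three lines print True: A beats B, B beats C, and C beats A.
# Whichever alternative you pick, a majority prefers something else.
```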

Nearly a decade ago, physicist David Meyer of the University of California at San Diego, and political scientist Thad Brown of the University of Missouri, suggested that this peculiar logic of popular opinion might well have something to do with the slippery views of politicians. Think of a candidate who just tries to mouth opinions that match up as closely as possible with the majority view. Quite possibly those opinions will involve some contradictions and apparently irrational cycles of preference. If so, then candidates and their advisors face a real choice between a strategy that respects ordinary logic, with the candidate promoting the kinds of consistent preferences that an individual might hold, and another in which the candidate abandons the need for consistency and uses more flexible and slippery tactics to appeal to as many voters as possible.

I'm guessing that almost all candidates follow the latter strategy, and those who don't tend to fall out of the polls very quickly. I recall being excited some years ago at the early Presidential candidacy of Arizona Governor Bruce Babbitt, who seemed to speak entirely sensibly about the importance of things like education, public health, investment in the nation's infrastructure, etc. He was so sensible and intelligent that his polling numbers quickly fell into the single digits and he dropped out of the race in about three weeks.

I suspect that experienced politicians may get drawn into adopting strategic contradictions more or less unconsciously, as they study polls and the results of focus groups and try to say things that put them in touch with the voters. "This," as Meyer and Brown put it, "is why it’s so hard for us as voters to discern exactly for what they stand."

The naive view of political strategy might hold that if most people's views fall into three sets, A, B, and C, which you might visualize in some "policy space," then the politician does best by "triangulating" -- setting out a position that falls at the center of those views. This way he or she appeals at least a little to everyone. But if cycles exist among the voting public, then a better bet may be to occupy not one point in policy space, but to spread out over a region and, roughly speaking, be as inconsistent as the voters.

Maybe there's a kind of perverse quantum logic to flip-flopping: just as quantum particles don't exist at any one point, but act as quantum waves and spread out through space, politicians defy the logic of individual thinking, and benefit by having fuzzy and ill-defined views.

Update

Just after I posted this, I remembered another interesting point relevant to this problem of Condorcet cycles in public opinion. The importance of the matter depends, of course, on the actual likelihood that public opinion really does show such cycles. Condorcet proved that it can. But in reality, does this happen very often?

Some physicists, Matteo Marsili and Giacomo Raffaelli, looked into this a few years ago. Their idea was to assume that individuals in a big population rank a series of alternatives, A, B, C, ... and so on in a random order. Each person has his or her own ranking, determined at random. Then they asked -- what is the chance, mathematically, that you'll find cycles in the group preferences when these people vote on the various alternatives in pairs? They vote on A versus B, A versus C, B versus C and so on. How likely is it that their voting shows a cycle? Their results showed that as the number of alternatives gets reasonably large, the chance of not having cycles dwindles rapidly to zero.
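Their analysis is mathematical, but the setup is easy to mimic with a crude Monte Carlo sketch (the voter counts, trial numbers and numbers of alternatives below are my own arbitrary choices, not theirs):

```python
import random

def majority_vote_has_cycle(num_voters, num_alts, rng):
    """Give each voter a random ranking, then test whether pairwise
    majority voting contains a cycle.  With an odd number of voters
    there are no ties, so the majority relation is a complete
    tournament, and a tournament is cycle-free exactly when its win
    counts are a permutation of 0, 1, ..., n-1."""
    alts = range(num_alts)
    rankings = []
    for _ in range(num_voters):
        order = list(alts)
        rng.shuffle(order)
        rankings.append({a: pos for pos, a in enumerate(order)})
    wins = [0] * num_alts
    for a in alts:
        for b in alts:
            if a != b and sum(1 for r in rankings if r[a] < r[b]) > num_voters / 2:
                wins[a] += 1
    return sorted(wins) != list(range(num_alts))

rng = random.Random(0)
for n in (3, 5, 8, 12):
    trials = 1000
    cycles = sum(majority_vote_has_cycle(25, n, rng) for _ in range(trials))
    print(f"{n:2d} alternatives: cycle in ~{cycles / trials:.0%} of trials")
```

The fraction of runs containing a cycle climbs steadily as the number of alternatives grows, which is the Marsili-Raffaelli result in miniature.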

So if people's preferences were random, cycles are almost a guarantee. Of course, preferences aren't random, but treating them as such is at least a crude approximation. And it suggests that the trouble presented by cycles, and the waffling strategies they make effective, may indeed be real.

One caveat, however -- Marsili and Raffaelli also showed that the tendency of people to conform with the views of others may play a powerful role in preventing Condorcet cycles and making the public's collective views more "rational" than they would otherwise be.

Thursday, July 26, 2007

Fat spreads

Just about any kind of human behavior or characteristic can spread between people more or less mechanically, like a virus. This is hardly news anymore; it's been found in crime, in decisions to buy cellphones or have children, in suicides, you name it. Now a new study reported in the New York Times demonstrates the case for obesity as well. Fat spreads. (A possible mechanism, the researchers suggest, is that people with obese friends revise their views on what is an acceptable body type.)

The NYT article touches on an extremely delicate topic. If obesity is a serious health problem, akin to a disease, then it looks as if it's a disease that spreads. So should people drop their fat friends? That's clearly a reprehensible take on the matter; one could easily reverse the perspective and say that thinness and health spread too, so that befriending the fat is an act of public health. Not to mention that lots of recent psychology shows that we judge our well-being in relative terms, comparing ourselves to those around us. So having fatter friends is probably good for our self-esteem, and that surely feeds into health as well.

But I'm betting that more than a few people will draw some weird and anti-social conclusions from this...

Curiously, and it seems accidentally, Dick Cavett also talks about obesity in an opinion piece, also in the New York Times. He makes a very good point that advertisers seem to have taken note of the rapidly increasing size of the American person, and put obese people all over television:

This disturbs me in ways I haven’t fully figured out, and in a few that I have. The obese man on the orange bench, the fat pharmacist in the drug store commercial and all of the other heavily larded folks being used to sell products distresses me. Mostly because the message in all this is that it’s O.K. to be fat.


Of course, advertising isn't the primary cause of U.S. obesity, which seems clearly linked to changes in food technology that have made high-calorie junk food readily available in a convenient format and extremely cheap. Oh yeah, and government subsidies for snack foods and soft drinks, manufactured out of corn syrup and soybean oil. I saw a study recently, don't recall where, showing that junk food in pure calorie-to-cost ratio is the cheapest food there is in the typical supermarket; so the incentive is there for anyone with limited funds to buy it.

A sensible government would take steps to change matters. Will our government be sensible? Don't hold your breath.

Wednesday, July 25, 2007

Comments anyone?

Idiot me...I just realized today, inspired by a helpful email from one reader (thanks Todd), that I've inadvertently been hindering people from commenting on my posts. I was not accepting comments from "anonymous" users, which, though I didn't realize it, meant that only those with Blogger accounts were able to comment. I had assumed anyone could.

I've changed the setting now so anyone can comment...

Deliberation difficulties

Getting back to group polarization and the problems of deliberation. Following on from the original essay in my New York Times column, I posted again on the topic a few weeks ago, exploring recent experiments by Cass Sunstein of the University of Chicago and colleagues on the phenomenon of group polarization. They found -- and a wealth of other evidence also supports this view -- that when people are brought together into discussion, the group often ends up coming to a consensus that is more extreme than the original views of the people making it up. You can read this paper here; it's due out soon in the California Law Review.

In an earlier paper in the Yale Law Journal, Deliberative Trouble? Why Groups Go to Extremes, Sunstein mentions, for example, a classic study in the 1960s that found what is called "the risky shift." The experiments had graduate students in management respond to questions involving attitudes toward risk. After answering, the students then engaged in open deliberation on the questions. The experiment found that, in their deliberations, the students almost always shifted toward more aggressive attitudes to risk than they had expressed privately.

By the way, if you have some time, read the Sunstein paper which talks about a number of similar examples. It's both illuminating and a great deal of fun to read.

But there is obviously a lot more to what happens in deliberation than a tendency toward polarization. The Sunstein et al. experiments show that groups of people who are relatively similar in their views to begin with -- say, a group of Democrats, or a group of Republicans -- after deliberation, come to a greater consensus (as well as becoming more extreme and, one might say, convinced in the rightness of their views). But what if people aren't similar? What if Democrats and Republicans try to discuss the issues together?

In a comment to that post, John Savage wrote that...

I’m unsure why you cite the Sunstein study as a test of the hypothesis that “deliberation days” would help to break down differences, when the experiment actually involved intentionally segregating liberals and conservatives, with the unremarkable result that both groups became more extreme.

At the end, you come to an optimistic conclusion from a centrist point of view: that somehow bringing liberals and conservatives together in a common forum would quickly break down their differences. As a political blogger, I just find this intuitively wrong.... two groups of “extremists” on opposite sides would probably not moderate each other’s views by talking to each other, no matter how many “links” you tried to provide between them.


Indeed, I didn't mean to imply that the Sunstein et al. study supports the idea that deliberation days would break down differences. What it offers is more of a cautionary lesson -- that if you're going to have deliberation days, you'd better try to get lots of diversity in your groups, because if you don't, you may well make the polarization even bigger. Logically speaking, the study doesn't say *anything* about what you're likely to see in groups where you *do* have lots of diversity and polarization. It just doesn't address that situation at all.

Since then I've been wondering about this. Cass in an email offered some views on what tends to happen in groups that do have great diversity, say both staunch Republicans and Democrats. As we all know from wandering in the political regions of the blogosphere, you tend to find persisting if not heightened polarization. Cass suggests that this tends to be the outcome whenever people clearly recognize and identify themselves with some group, such as a political party. The "identity differences" make it difficult for either group really to listen and take the points of the other side seriously.

I've come across some other fascinating work that also touches on this point. For example, in a nice paper entitled Modeling Cultural Cognition, Dan M. Kahan, Donald Braman, and James Grimmelmann (of Yale, George Washington and New York Law Schools respectively) argue that people who strongly hold different opinions -- on an issue such as gun control, for example -- often do so largely for powerful cultural reasons that make them relatively impervious to persuasion. As they put it, "culture is prior to facts in individual cognition."

The idea is that for a number of fairly basic psychological mechanisms, people tend to hold the same beliefs as most people in their own cultural group (Republicans, economists, academic physicists, etc.). And these views do not easily get swayed in the light of new evidence. Rather, "the beliefs so formed operate as an evidentiary filter, inducing individuals to dismiss any contrary evidence as unreliable, particularly when that evidence is proffered by individuals of an opposing cultural affiliation."

Now I think we can all relate to that observation. I can present all the new climate studies I want, published in Nature, Science, wherever, to my determined climate-skeptic friends, and even if they listen politely (rolling their eyes occasionally), I know they just don't accept it. It doesn't sink in as new, legitimate information. And from my side, no new "study" on climate change written up by anyone from the American Enterprise Institute is likely to change my mind (see, I couldn't even help but put quotes around the word "study"!).

A more important question, of course, is how it might be possible to design strategies to get past these entrenched cultural differences that make it virtually impossible for people to come to agreement, even in the context of full information. You need to devise strategies for deliberation so that people can take on new information without having their cultural certainty threatened (at least not too rapidly). Seems like a tall order.

But I did hear recently of some researchers in Belgium, coming out of ecology, I think, who were devising just this sort of strategy, and using it, apparently effectively, in practice, bringing together business people with environmental activists to work together on land use issues, for example. I'll post on this soon if I can find out more about it. What I remember of the idea is that they bring the polarized parties together and first have them discuss all those things on which they AGREE, which usually are quite a lot. They establish common ground, and at least a little bit of trust, and then gradually take small steps away from that.

Monday, July 23, 2007

Irritating details

The advantage of science over ideology is that science (when it's done well) aims to learn and accept the facts, however uncomfortable they may be. You may want the universe to be infinite, or finite, finite with sharp edges or finite but unbounded (like the surface of a sphere), or something else entirely different, but ultimately your desires won't change anything. Really good scientists have the ability to question their own preconceptions, and adjust their beliefs in the light of new facts. As Nietzsche once said, it's not so hard to have the courage of one's convictions; much more difficult is to have the courage to question one's own convictions.

I noticed today that the NYT's David Brooks cited a recent paper from MIT economists Xavier Gabaix and Augustin Landier on the matter of CEO pay in the U.S. However frequently I find Brooks' views irritating, and while I strongly suspect he cites this work because it supports his natural affinity for hard-boiled capitalism, this particular citation does seem wholly appropriate -- for Gabaix's work, at least what I know of it, is based strongly on careful science.

What the paper claims is that the six-fold rise in CEO pay over the past two decades cannot easily be attributed to a culture of runaway greed or any other insidious evolution of corporate management. Rather, it seems that relatively straightforward economic factors -- essentially the competition between firms for good decision makers -- may well explain the rise. Over the same two decades, the authors point out, the size (capitalization) of the largest U.S. firms has also gone up by a factor of six. CEOs make so much more now, the argument goes, not because of stock options, or managerial "skimming" of corporate profits, but because their actions influence enormously larger sums of money -- and small but real differences in their abilities count for a lot.

This is very much traditional "neoclassical" economics, but their argument, in more detail, seems strongly plausible. They assume that firms compete for CEO talent, and that talent naturally becomes increasingly important in larger firms with more assets at stake. Mathematically, this leads to the prediction that larger firms should have more highly paid executives, which is empirically true. (CEO pay scales in a regular way as firm size, whether measured by capitalization, earnings, or number of employees, raised to the 1/3 power.) It also predicts that CEO pay on average across the largest firms should rise in direct proportion to the average size of those firms -- not actually a startling conclusion, when you begin thinking this way.
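To see how those two scaling claims hang together, here's a toy numerical check. The functional form below (pay growing as own firm size to the 1/3 power times typical large-firm size to the 2/3 power) is my shorthand rendering of the kind of result they derive, and all the numbers are invented:

```python
def ceo_pay(own_size, typical_size):
    # Toy scaling rule: pay ~ (typical large-firm size)^(2/3)
    #                         * (own firm size)^(1/3).
    # Sizes and pay are in arbitrary units.
    return typical_size ** (2 / 3) * own_size ** (1 / 3)

# Cross-section: within a fixed year (typical_size fixed), a firm
# 8x larger pays its CEO only 8^(1/3) = 2x more.
print(ceo_pay(8.0, 1.0) / ceo_pay(1.0, 1.0))    # 2.0

# Time series: if the largest firms all grow 6x over two decades,
# pay grows by 6^(2/3) * 6^(1/3) = 6 -- direct proportion.
print(ceo_pay(6.0, 6.0) / ceo_pay(1.0, 1.0))    # 6.0
```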

But there's one other interesting thing in this work. Gabaix and Landier use their theory, and the data on compensation, to estimate the distribution of talent among CEOs. What they find is that across the top CEOs, say the top 1,000, the variation in talent is almost zero. Taking the very best CEO and replacing him or her with the 1,000th best would tend to decrease a firm's earnings by only 0.04% -- less than one part in 1,000. BUT -- because the largest firms are so large, and have so many assets, this tiny change can have major monetary consequences. So it makes sense, from a strictly economic point of view, for CEO number 1 to make a lot more than CEO number 1,000 even though their skills are essentially equal.

Of course, none of this says anything about the kinds of effects that gross economic inequality, stirred up by such enormous pay differences, may have on the larger community. That's another story. But what is clearly so nice about this work, at least in my opinion, is that it offers such a natural and non-conspiratorial explanation by making a direct appeal to real data. We could do with more of that.

Monday, July 16, 2007

What we're not hearing

I almost cannot believe what I just read in today's New York Times Opinion piece by David Brooks. Sheer craziness. Apparently, says Brooks, the media is going out of its way not to report on matters that would cast the President in a good light:

Bush is not blind to the realities in Iraq. After all, he lives through the events we’re not supposed to report on: the trips to Walter Reed, the hours and hours spent weeping with or being rebuffed by the families of the dead.


If I understand correctly, we're supposed to believe that George Bush is making lots of trips to Walter Reed to visit wounded soldiers, and spending hours and hours of his precious time weeping with families of dead soldiers, and the White House (or someone else) just doesn't want this to be reported because...it would show...I don't know, that Bush actually cares? That would be so bad for his image.

Come on.

Then again, Brooks is clearly prone to delusions. He ends the column envisioning an epic philosophical confrontation between our wise George and the great Russian novelist Leo Tolstoy over the proper interpretation of history and the forces that shape it. Who knew that George Bush was so profound?

Ethics versus reality

This profile of Harvard Professor Howard Gardner in the business magazine Strategy+Business is inspiring and a little depressing at the same time. Here's a man who is immensely well motivated and clearly a force for the good; for decades he's been writing books and talking to corporate leaders, trying to bring ethics into the corporate world. Yet he's clearly been encountering a deeply held view that sees ethical behavior as an "expensive luxury." The received wisdom glorifies the business importance of "hard-nosed" decision-makers who focus only on "the bottom line," and who know that greed is ultimately good.

Gardner argues that this is an illusion and that ethical behavior is just as important, perhaps more important. Why should business aspire to anything more than maximizing shareholder returns? The article expresses Gardner's views as follows:

The very survival of business may depend on a more widespread benefit. Just as the church of latter-day Europe lost its influence when the mass of society began to doubt its relevance, so too could corporate enterprises be rejected by the body politic — consumers, employees, and even shareholders — if they fail to generate wealth for more than a privileged few. If given a choice, he believes, knowledge workers will flock to companies that embrace high standards of excellence and that allow them to feel engaged with society, leaving other firms with the less talented, less motivated members of the workforce.

Many of today’s young entrepreneurs... know that a company culture of ethics, engagement, and excellence is more likely to nurture the innovation that markets reward. Thus, the companies that win in the marketplace will be those that enable good work: work that is well executed, contributes to society, and is personally enjoyable to perform. For when ethics atrophy, so does the ability to execute and lead. And when ethics are well developed, people have an inner gyroscope they can rely on for guidance as they confront the complexities of the business environment.


I do think he has a good point, and the idea of improving business ethics as a way of doing better business has been emphasized by many management theorists in the years after the last wave of corporate scandals at Enron, Arthur Andersen, WorldCom, etc. There's some good evidence that current education in economics and business does make people more likely to be greedy and to act unethically. But there's maybe also a little wishful thinking here, and especially a lack of attention to the heterogeneity of humankind. Sadly, some people really are greedy in a deep way, and they're not likely to be changed by further education or being made to voice oaths of ethical behavior. Moreover, what we know of the dynamics of collective cooperation suggests that only a few such people can have a corrosive influence on an entire group.

One of my favorite experiments exploring human cooperation involves so-called "public goods" games. The idea is simple. You have, say, ten volunteers in the experiment who start out with $10 each. The game proceeds through a series of rounds, in each of which each individual decides how much they'd like to contribute to a "public fund." The reason you might contribute is that you might get something back from it. The experimenter makes it clear that, after everyone has made their contribution, he or she will take the total in the fund, multiply it by two, and then distribute it back to everyone equally.

So, if everyone contributes their full $10, you'll have $100 in the middle. The experimenter will double that to $200, and then return $20 to each person, giving each a profit of $10 (the $20 minus the $10 they put in). Everybody wins! But there's trouble lurking, because each individual might be able to make even more by cheating. Suppose, for example, that everyone but me contributes $10, and I put in nothing. The total will then be $90, which doubles to $180, with $18 then returned to each person -- including me. Now I've made a profit of $18 rather than just $10. (The name "public goods" comes in because the experiment mimics the situation we face collectively in trying to build roads, support an army etc. If we all pay our taxes, we can all have excellent schools and good roads; but if I can avoid paying taxes, I can still enjoy those things, and have more money too.)

What happens in real experiments (such as those conducted a few years ago by economists Ernst Fehr and Simon Gachter) is fascinating. In the first round, almost everyone does contribute a large share of their money. Instinctively, we're prepared to be generous and to do our part in helping the group. But the cooperation doesn't last. Not everyone is so cooperatively minded, and in round one a few people do cheat. In doing so, they set the seeds for the demise of the entire group. For in the second round, some of those who gave generously in the first round, seeing that some others did not, refuse to be cheated again. They don't contribute as much. As the game continues, more and more participants see others cheating and adopt the same behavior -- after all, who wants to be one of the few supporting the cheating many? After about ten rounds, no one any longer contributes anything.
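A minimal simulation conveys the flavor of this unraveling. The behavioral rules here are my own invention (a couple of die-hard free riders, plus cooperators who partially match the average contribution they saw last round), not Fehr and Gachter's actual protocol:

```python
import random

rng = random.Random(1)
N, ROUNDS, MULTIPLIER = 10, 10, 2.0
FREE_RIDERS = 2   # players who always contribute nothing

# Cooperators start out generous, contributing $7-$10 of their $10.
contributions = [0.0 if i < FREE_RIDERS else rng.uniform(7, 10)
                 for i in range(N)]

for r in range(1, ROUNDS + 1):
    pot = sum(contributions)
    avg = pot / N
    payout = MULTIPLIER * pot / N   # everyone's share of the doubled pot
    print(f"round {r:2d}: average contribution {avg:4.1f}, payout {payout:4.1f}")
    # Next round, cooperators match (and slightly undercut) the average
    # they just observed; nobody wants to be the sucker.
    contributions = [0.0 if i < FREE_RIDERS else 0.8 * avg
                     for i in range(N)]
```

Run it and the average contribution decays round by round toward zero, the same slide the experiments show.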

What the experiment implies is that cooperation in such a scenario is not stable. It won't last, even though most people are cooperative and ethical, because some people cheat. The direct consequence of their existence (at least in the very simple setting of the public goods experiment) is a classic tragedy of the commons, doubly tragic because the cooperative get dragged down to the lowest level by the actions of the greedy.

This kind of experiment explores something akin to the basic physics of collective cooperation. And while it's obviously immeasurably simpler than any business organization, it also shares some important elements. A business is an enterprise of many people who, by working together, can gain more than if they all were working apart. Its successful functioning requires that most people contribute their efforts honestly, without shirking or trying to cheat their way to an unequal share. Unfortunately, acting greedily is precisely what orthodox economics has been counseling (at least implicitly) for many years.

In Ultimatum Game experiments with graduate students from different disciplines, for example, economist Robert Frank and colleagues found that graduate students in economics were markedly less likely to act in a cooperative manner than students in other disciplines. (For example, see the paper Do Economists Make Bad Citizens? available here.) The researchers' explanation is that the students had absorbed the "self interest" model of human behavior in their education, and took it as a guide for how other people will behave. They act more greedily because they expect others to be greedy.

Getting back to Gardner, then, what he says makes sense, but I don't think it goes far enough. He's right to attack the current style of education in business and economics, which doesn't adequately stress the cooperative aspects of business success. Improving education could give us more humane and rounded business leaders, yet I don't think we can rely on better education alone. Indeed, some recent experiments in psychology suggest that about 2% of the population act more or less like sociopaths, with no feeling whatsoever for others (I can't now recall where I saw this, but I'm seeking the link). No amount of education, I suspect, will weed out this 2% who feel no compunction about cheating and stand ready to kick off the inexorable slide toward collective non-cooperation.

Of course, this is in large part precisely why we have laws and regulations, both within companies and in society, to control those not adequately constrained by internalized social norms. Indeed, Fehr and Gachter, in their experiments, explored variations in which the participants, after each round, could pay a small fee in order to punish anyone who they thought hadn't contributed adequately (the punishment was a fine against that person). Although orthodox economics would say that no one would ever choose to act this way (paying out personally, to punish someone else, while getting nothing directly in return), many people were in fact willing to do so. Consequently, cheating became more costly and those tempted to cheat ceased doing so. With the possibility of punishment in place, the group's level of cooperation remained high indefinitely.
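Adding a punishment stage to the same toy simulation shows the stabilizing effect. Again, the specific rules (who gets fined, how fined players respond) are invented for illustration, not taken from the Fehr-Gachter design:

```python
import random

rng = random.Random(1)
N, ROUNDS, ENDOWMENT = 10, 10, 10.0

# Two free riders again, plus eight initially generous cooperators.
contributions = [0.0, 0.0] + [rng.uniform(7, 10) for _ in range(N - 2)]

for r in range(1, ROUNDS + 1):
    avg = sum(contributions) / N
    # Peer punishment: anyone contributing less than half the group
    # average gets fined by the others.
    fined = [c < 0.5 * avg for c in contributions]
    # Cheating is now costly, so fined players switch to contributing
    # fully; with no one undercutting them, cooperators edge upward.
    contributions = [ENDOWMENT if f else min(ENDOWMENT, c + 1.0)
                     for c, f in zip(contributions, fined)]
    print(f"round {r:2d}: average contribution {sum(contributions) / N:4.1f}")
```

Instead of collapsing, the group's average contribution climbs to the maximum and stays there, just as in the punishment version of the real experiments.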

Obviously it is difficult to draw any conclusions about the real world from such simple experiments. But they stress how useful and effective the mechanism of transparency (each person being able to see how much others are contributing) can be, especially when coupled with means for punishment of those who transgress social norms. Howard Gardner may achieve a great deal with education, and with calls for a great commitment to ethical behavior, but the realist in me says that nothing will really change unless those who cheat -- and there will always be such people -- have a high chance of being caught and punished.

Friday, July 13, 2007

The big lie... the old technique

I have no time to post today (as I need to meet deadlines for writing some things for which I'll actually get paid), but I just can't resist...

AJ at Americablog discusses what seems to be a directed effort to repeat the Iraq-invasion-era misleading of the American public, no doubt to offer some political cover for the beleaguered administration. Every day we hear the same mantra from the White House, that we face the epic battle for the future of Western civilization in confrontation with Al-Qaeda in Iraq (usefully conflated with Al Qaeda more generally), who will follow the troops home if they aren't first defeated in Iraq, and who therefore represent the principal threat there. As AJ writes,

This is, quite simply, completely and totally false.

Anyone who claims that the so-called al Qaeda in Iraq group is the "principal threat" to anything in that nation -- whether its citizens, the government, the political process, or any specific ethnic or sectarian group -- is either grossly ignorant of the realities of the Iraq war or blatantly lying. I honestly have no idea which it is in this case, though it's worth noting that the chief U.S. military spokesman, Brig. Gen. Kevin Bergner, was employed as a Special Assistant to the President prior to his current appointment.

Most reliable estimates put the fundamentalist/jihadist/al Qaeda actors in Iraq at around 3-5% of the total insurgency, with virtually no approximations exceeding 10%. I really cannot overstate how misleading it is to focus on al Qaeda when the driving forces of the conflict are average, native, very pissed-off -- but not religious fundamentalist -- Iraqis. The vast majority of the Sunni population is relatively secular (more secular, in fact, than Iraqi Shia), and even tacit support of jihadists is founded in anti-American sentiment. Even the sectarian violence is fueled more by localized conflicts between Sunni and Shia families, tribes, and militias than by al Qaeda.

It is true that AQI groups commit the most spectacular attacks, including the vast majority of suicide bombings, but if the underlying problems were solved, or even addressed (including, but not limited to, oil revenue sharing, federalism, de-Ba'athification, provincial elections, etc.), AQI would lose most of its ability to operate because it would have no support on the ground.


As Ian Welsh points out at The Agonist, what's most disturbing is the stark effectiveness of this old technique of telling the Big Lie often enough until most people believe it:

The administration just keeps saying it, and saying it and saying it and the media, including the print media, just repeat it. Which, I'll point out, is propaganda rule #1. I'd love to see a poll showing how many Americans think AQ is the primary enemy in Iraq - I'd be quite surprised if it isn't a majority.

This is why decision making in the US is so broken, because it's based on lies and those lies are established through, honestly, no exaggeration, classic Big Lie propaganda techniques right out of a 1930's handbook.


Some of you who visit The Social Atom now and then may recall the post of a few days ago on recent psychological experiments on how people form opinions about what the majority thinks. The clear conclusion is that a few voices, repeating the same claims persistently and loudly, will make their way into our brains and have a significant influence on our thinking. This isn't rocket science, as they say, but the simple dynamics of the human brain. Unfortunately, the White House understands this all too well, while our media seems bizarrely naive and unprepared to recognize the way they're being played -- to our immense collective cost.

Thursday, July 12, 2007

Coincidences... -- Mark Buchanan

A few years ago, I experienced a very strange coincidence. I emailed a physicist at the University of Chicago (I can't recall who now, maybe Tom Witten) with a couple questions on some of his work. He wrote back with some answers, and then put a P.S. in the email: "Say hello to Haim Diamant."

Now I'd never heard of Haim Diamant, so I wrote back saying, "Thanks for your response, but who is Haim Diamant?"

In a few hours I got another email: "Is it really possible," he wrote, "that there could be two Mark Buchanan's, both physicists and both living in France?"

As it turns out, there were: me, and another Mark Buchanan, a physicist who had formerly worked in Chicago (and is now at the University of Oslo), and who emailed me the following day. It's a very odd experience getting an email from someone with your exact name. Anyway, I've followed Mark's work over the past few years; curiously, we not only share the name but a deep interest in the same kind of physics, the physics of liquids and solids, pattern formation, the spontaneous emergence of collective organization, and so on. Strange.

I mention this because I just had another weird coincidence today, getting an email from one Mark Earls who has written a book quite similar to my The Social Atom. His book is called Herd, and, while I haven't read it yet, it looks very interesting. I just visited his blog, and noticed one very interesting comment on a cognitive bias that most of us have...

"we find it really much easier to respond to individual others than to the confusion of the group. It's easier to think of the face in the crowd than the crowd. Now there's a psychology insight that might be useful... "

Indeed, Nicholas Kristof wrote a very powerful essay in the New York Times on just this issue in May. He cited some experiments that show people respond with more generosity to a photo of one starving child, with a face and name, than to a verbal plea for aid for starving millions. A brief excerpt:

"In one experiment, psychologists asked ordinary citizens to contribute $5 to alleviate hunger abroad. In one version, the money would go to a particular girl, Rokia, a 7-year-old in Mali; in another, to 21 million hungry Africans; in a third, to Rokia -- but she was presented as a victim of a larger tapestry of global hunger.

Not surprisingly, people were less likely to give to anonymous millions than to Rokia. But they were also less willing to give in the third scenario, in which Rokia's suffering was presented as part of a broader pattern."

I'm not sure what evolutionary psychologists would say, but it seems reasonable to speculate that we're just not evolved to think properly of hundreds or millions of people. Our ancestors spent their lives overwhelmingly in groups of at most 25 to 30 people, and usually in sub-groups of far fewer, and they evolved to make good decisions in those contexts. Nothing in evolution has prepared us for the massive collectivity we experience today.

The expertise of the ignorant -- Mark Buchanan

I was just about to post something on the science of climate change -- and on how people form their opinions about it -- when I happened upon several must-read posts on a similar theme. Chris Mooney has two pieces (here and here) in the Huffington Post exploring how public opinion has "tipped" in recent years, bringing the urgency of climate change much more into mainstream thinking. And there's a wonderful post by Orac at Respectful Insolence looking at the bad arguments that get repeated interminably by climate change "skeptics" -- how can there be warming if it's so cold today?, etc. If Mooney's evidence is correct, then these skeptic red-herrings aren't standing up so well as they used to, which is a good thing.

Personally, I'd been stirred by Camille Paglia's responses to her readers. Paglia often says insightful things on any number of topics, but she seems to go off the rails when it comes to climate change. I'm not sure why. One reader, a self-proclaimed expert in nuclear design and atmospheric modeling, took issue with Al Gore and the "hysteria" over climate change, and Paglia loved every word of it:

"Bravo for your invigorating deconstruction of current propaganda! I too am very concerned about the potential damage to Democrat credibility coming from the grab-bag Gore crusade, with its wild exaggerations and hypocritical sanctimony. It does make liberals look like ditzes -- the last thing the party needs in a presidential campaign where no-crap national security issues will be paramount. Environmentalism is of vital importance to our future, but it cannot be based on lies."

It's mysterious to me why concern over the Earth's future climate would make liberals "look like ditzes." Somehow it seems more likely that those who continue to deny any link between human activity and global warming, in the face of the consensus opinion of the world's scientists -- who've actually spent time studying the problem in great detail -- would be more deserving of ridicule.

But the skeptics clearly don't accept the idea that climate scientists might know more than the average citizen, or that their models deserve more consideration and weight than off-the-cuff observations about local cold spells and the like. It's a peculiar view, and it raises an interesting point.

How do we make up our individual minds about any scientific issue, including climate change? Let's be honest. On almost any scientific issue, none of us has really investigated all (or even a little bit) of the evidence to the point of becoming an expert. Rather, we form our opinions by learning a limited amount, and by weighing the words and arguments and reputations of others. We're finite beings in a hugely complex world, and we really have no other choice but to learn (or attempt to learn) from others.

I've never performed careful experiments to test the principles of relativity, but I believe in it because I've worked through the logic of the mathematics, which has a beautiful coherence, and I've read the papers (or textbook discussions) documenting the important experiments that support the theory. And I've experienced the workings of the scientific process, which makes it in the interests of many others to point out any errors or mistakes in such experiments.

Suppose I were to ask Camille Paglia what she thinks about the prospects for, say, spintronics to be a commercially viable technology in the next five years (the idea is to design devices that would use the spins of electrons, not only their charges, to carry information). My guess is that she hasn't studied the latest papers in Nature or the Journal of Applied Physics describing key advances in the technology, and the hurdles that remain, and so to come to an answer she'd consult some experts. She'd call up physicists involved in the area, a range of theorists and experimentalists, and try to form some consensus of their views. She might try to read some review papers by those experts, and so on. This would be a sensible approach for anyone -- talk to some of the people who know.

I imagine that Paglia, or any other intelligent person, would use the same strategy in trying to come to an informed and accurate opinion on just about any scientific matter, ranging from the causes of deep earthquakes to the puzzle of high-temperature superconductivity. She knows that on these matters she doesn't know anything, but that there are people who've devoted their intelligence and energy for decades to building an understanding.

But then when it comes to climate change, suddenly everything changes....the scientists can't be trusted. Paglia and other skeptics don't look to the views of the IPCC, or to the vast majority of other climate scientists who express similar views (or see far more extreme outcomes as likely), but instead accept the slogans of those with obvious political interests, or the views of a very tiny minority of scientific skeptics.

Why is that? Frankly, I don't have an answer. But unfortunately, the very complexity of the climate system adds to the problem. It's easy for skeptics to make up some story about how it's all the sun causing the warming, or how today's warming isn't anything special relative to the past, and so on. They offer up simple stories and explanations that stick in the mind. Meanwhile, it's not so easy for climate scientists to explain the results of computational models that for accuracy have to include a multitude of nonlinear feedbacks, and in many ways have become as complex and difficult to understand as the climate itself. There are no simple stories to tell. But that's the situation we're in, and this is the best science we currently have.

Richard Feynman had a great definition of science. "Science," he said, "is belief in the ignorance of experts." He was implying that science is not about accepting claims based on faith, or because they're proclaimed by some famous and esteemed professor, but because they're supported by evidence. Experts ought to be questioned and challenged, and they are, everyday, in the ordinary workings of science.

But belief in the ignorance of experts isn't the same as belief in the expertise of the ignorant, which is what the skeptics often seem to embrace.

Wednesday, July 11, 2007

Mechanical man?

I have a new article out in New Scientist, in case anyone might be interested. It explores the idea that perhaps we humans aren't the uniquely conscious and rational animals we think we are, but act on a much more instinctual and mechanical basis, our actions often being determined by stimuli in our environment. I've touched a little on the work of Alex Pentland, whose experiments I mentioned briefly in an earlier post, and also the experimental work of psychologists John Bargh of Yale University and Ap Dijksterhuis of the University of Amsterdam.

To read a little of the original research, I suggest looking at this nice paper of Pentland, which explores two interesting ideas. First, that traditional behavioral science makes a big mistake by looking first to our rational thinking and verbal communications to explain what we do. He suggests that the baseline assumption should instead be that we, like other animals, often act for reasons that we're not really aware of, and communicate with others through instinctual and non-verbal means. In experiments, he shows that most of what we do (especially in our routines) can be accounted for in this simpler way. Second, he also argues -- and this I think is really interesting -- that our intelligence doesn't actually reside at the individual level, but at the level of the group. In other words, it's not the cleverness in our individual heads that makes people so capable, but the delicate and effective non-verbal communications that bind us into cohesive groups with agile collective behavior.

Tuesday, July 10, 2007

Winning by repeating -- Mark Buchanan

I'm going to have some more to say on the issue of the polarization of opinions, which I wrote about a little last week. Cass Sunstein emailed me with some insightful comments I'd like to share, but I first have to read and digest a couple papers he sent me.

Meanwhile, check out some new work that brings out in a disturbing way how the apparent consensus of a group can be strongly influenced by the loudest members. Suppose you go around and talk to the students in some class, listen to their opinions, and then later try to give an impression of the distribution of opinions within the group. Clearly you do better in terms of accuracy if you hear from lots of people, and so sample the group in a reasonably fair way.

But psychologist Kimberlee Weaver of Virginia Tech and colleagues found that real people don't form their views this way. Rather, if we encounter one person, repeatedly, who voices the same opinion many times, we tend to weight that opinion more strongly, even though it is obviously just one person's opinion. They found that if one person voiced an opinion, say, three times, it ended up being counted almost as much as if the opinion were voiced separately by three different people. The reason our minds do this, the researchers suspect, is that "the more often an opinion has been encountered in the past, the more accessible it is in memory and the more familiar it seems when it is encountered again." You can read the full paper here.

The conclusion is worrying: "repetition of the same opinion gives rise to the impression that the opinion is widely shared, even if all the repetitions come from the same single communicator."

This psychological propensity clearly feeds into the problem of "pluralistic ignorance," which I touched on in one of my New York Times essays. Unfortunately, our instincts for assessing what "most people think" can easily be led astray, especially by a powerful (and often less than responsible) media.

Thursday, July 5, 2007

A network future for web advertising

Ideas from a meeting on Complex Networks, 2-6 July, Sardinia, Italy

Andrei Broder, vice-president of emerging search technology at Yahoo, gave a nice talk this morning on the nature of web advertising and where it's going. It seems that network science is likely to play an influential role -- supporting the emergence of a powerful version of "algorithmic" advertising -- just as it has in web search.

Classic advertising, Broder pointed out, tends to work either by building image and brand -- counting on long-term allegiance from consumers -- or by making more specific appeals to "act now" (say, by offering discount coupons). You find these all over the web, of course, as in ordinary print media or television, but what's different on the web is the incredible speed and volume. Whereas advertisers used to do surveys and think hard about where to place what kind of ad, increasingly the approach has to be algorithmic -- you need software to make decisions and place ads on a second-by-second basis, and to adapt rapidly to how consumers respond.

Learning to do this well (effectively bringing customers into contact with ads for things they really want) is a big challenge, and systems today make lots of mistakes. Broder mentioned, for example, a recent New York Times article on the Lewis Libby affair, where ads showed up on the page for Libby Shoes, not exactly the connection for which the advertisers were presumably hoping. How to do this better? Broder suggests that a sophisticated mathematical/computational approach using complex network science may be the solution.

Web search was revolutionized by the PageRank algorithm, which makes calculations on the entire network of linked pages in order to assign an "importance" to any one page. In the case of advertising, you can imagine an abstract network, where the links correspond to a trio of 1) the user (the consumer), 2) the context (the web page) and 3) the advertisement. Based on historical data (which a company like Yahoo! can collect at the level of something like 10^12 points) you can (in principle) build this graph, adding edges for each trio where something positive happened (a click-through). Then use this data, and do network analysis to try to predict other trios where you're likely to find success again.
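In cartoon form, the idea might look something like the sketch below. Everything here (the made-up click data, the crude shared-pair scoring rule) is my guess at the flavor of the approach, not anything Broder actually presented:

```python
from collections import defaultdict

# Observed successful trios: (user, page, ad) where a click-through
# happened. All data invented for illustration.
clicks = [
    ("u1", "sports-news", "running-shoes"),
    ("u2", "sports-news", "running-shoes"),
    ("u2", "marathon-blog", "energy-gel"),
    ("u3", "marathon-blog", "running-shoes"),
]

# Index the graph: for each pair drawn from a trio, remember which
# third elements completed a successful trio with that pair.
pair_to_third = defaultdict(set)
for user, page, ad in clicks:
    pair_to_third[("user-page", user, page)].add(ad)
    pair_to_third[("user-ad", user, ad)].add(page)
    pair_to_third[("page-ad", page, ad)].add(user)

def score(user, page, ad):
    """Crude evidence count: how many observed successes share two of
    their three elements with the candidate trio?"""
    return (len(pair_to_third[("user-page", user, page)] - {ad})
            + len(pair_to_third[("user-ad", user, ad)] - {page})
            + len(pair_to_third[("page-ad", page, ad)] - {user}))

# Is "running-shoes" a promising ad for user u2 on the marathon blog?
print(score("u2", "marathon-blog", "running-shoes"))   # 3: strong evidence
print(score("u1", "marathon-blog", "energy-gel"))      # 1: weaker evidence
```

A real system would obviously use far more sophisticated network measures over vastly more data, but the spirit is the same: exploit the structure of past successes to predict new ones.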

I'm sure this kind of work will yield results pretty quickly, and I bet those Libby Shoes ads start finding more relevant outlets. The interesting thing, to me at least, is how rapidly the (online) advertising community has become quite sophisticated in a mathematical sense. This community has been taken over and driven by computer scientists, physicists and mathematicians -- but its success also depends on a really new interaction between scientific disciplines, which is the only way to get the mathematics and algorithms to interact successfully with human psychological habits.

Wednesday, July 4, 2007

Learn about networks

Italian physicist Guido Caldarelli has an excellent new book out on networks. It's not exactly for the non-technical reader, but for anyone who really wants to learn the mathematical nuts and bolts of network theory, and to come to terms with degree distributions, adjacency matrices, clustering coefficients and the like.
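To give a small taste of those quantities, here's a minimal example using the Python networkx library on a tiny made-up graph (my own illustration, not drawn from the book):

```python
# A minimal taste of degree distributions, adjacency matrices and
# clustering coefficients, using networkx on a tiny made-up graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4)])   # a triangle with a tail

print(dict(G.degree()))           # degree of each node: {1: 2, 2: 2, 3: 3, 4: 1}
print(nx.to_numpy_array(G))       # the adjacency matrix
print(nx.average_clustering(G))   # mean clustering coefficient (about 0.58)
```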

If you want the more qualitative picture, you might begin with my Nexus (or either of two other popular books on the topic, Laszlo Barabasi's Linked or Duncan Watts' Small Worlds), but Guido's book is the first, to my knowledge, to put all the necessary technical information in one place, and it should appeal to technically minded students interested in network science.

Damaging deliberations

Ideas from a meeting on Complex Networks, 2-6 July, Sardinia, Italy

In one of my recent New York Times columns, I explored the worrying polarization evident between the conservative and liberal bloggers in the US. I argued that it might well be the almost mechanical outcome of an amplifying feedback driven by simple psychological factors that influence how people form opinions and attitudes. First, the psychological phenomenon of “cognitive dissonance” tends to make us more comfortable with views that confirm rather than contradict our own. Second, people also have a strong tendency to adopt, even unconsciously, the attitudes of those with whom they interact. So the more conservative or liberal bloggers read the views in their own sphere, the more they're drawn into that sphere, express similar views, and end up living in an intellectual world of views that merely confirm their own.

That was speculation. But there's some exciting recent work that I think supports it -- some from network theorists here in Sardinia who are looking, rather abstractly, at the mathematics of opinion change within groups, and some from lawyers trying to address, in practical terms, how we might heal our polarization with greater deliberation. The lesson, I think, is that we had better be quite careful in what we do, or we could make matters even worse.

Yesterday, physicist Renaud Lambiotte of the University of Liege in Belgium talked about his recent work (with physicists Marcel Ausloos and Janusz Holyst) in modeling the evolution of opinions. Their modeling suggests that two groups holding opposing views may quickly become reconciled, or remain at odds, and that what happens -- and this is the important point -- can be very strongly influenced by the presence or absence of only a few social links between the groups. Here's the basic idea.

They supposed that individuals hold one of two opinions (it's not hard to think of some relevant issue), assigned randomly at the start. People in the model would then change their views, step by step, by a “majority rule” -- each person would adopt (in the next time step) the opinion held by a majority of those with whom they were linked in the social network. The interesting thing to explore is how the structure of the network influences what ultimately happens.

Lambiotte and colleagues started by imagining two groups that were isolated from one another, or nearly isolated. Not surprisingly, perhaps, they found that people within each group quickly came to share one opinion: each group reached an internal consensus, although the two groups were as likely to agree as to disagree with each other. But the researchers then began adding social links between the groups, to see how this might change the outcome.

They found no change, at first, as the two groups continued to form opinions independently. But rather than a gradual increase in the way opinions “leak” from one group to the other as more connections were added, the researchers found a surprise when the number of links between the groups reached a precise threshold. Abruptly, the final opinions of the two groups were now always identical. Even a few extra links between groups were enough to “tip” their final opinions from a state of full polarization to full agreement.

This finding represents the social equivalent (in this simple model) of what physicists call a "phase transition", closely akin to the abrupt transformation of liquid water to ice. The interesting thing is that the change from one outcome to the other isn't gradual, but very rapid, and happens at some critical threshold of links between the groups. Near this boundary, a tiny alteration of the network structure can lead to drastic consequences.
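For readers who like to tinker, here's a rough Python sketch in the spirit of this model -- two tightly knit groups, a tunable number of links between them, and a majority rule applied to small, randomly chosen neighborhoods. The group sizes, step counts and update details are my own simplifications, not the authors' exact model:

```python
# A rough sketch in the spirit of the two-community majority-rule model:
# at each step a random node and two of its neighbors all adopt their
# local majority opinion. My own toy version, not the authors' code.
import random

def run(n=30, inter_links=3, steps=30_000, seed=None):
    rng = random.Random(seed)
    # Nodes 0..n-1 form group A, nodes n..2n-1 form group B; each group
    # starts fully connected internally.
    neighbors = [[j for j in (range(n) if i < n else range(n, 2 * n)) if j != i]
                 for i in range(2 * n)]
    for _ in range(inter_links):          # sprinkle links between the groups
        a, b = rng.randrange(n), rng.randrange(n, 2 * n)
        if b not in neighbors[a]:
            neighbors[a].append(b)
            neighbors[b].append(a)
    opinion = [rng.choice([-1, 1]) for _ in range(2 * n)]
    for _ in range(steps):
        i = rng.randrange(2 * n)
        j, k = rng.sample(neighbors[i], 2)
        majority = 1 if opinion[i] + opinion[j] + opinion[k] > 0 else -1
        opinion[i] = opinion[j] = opinion[k] = majority
    return sum(opinion[:n]) / n, sum(opinion[n:]) / n   # mean opinion per group

# How often do the two groups end up on the same side?
for links in (0, 3, 30):
    agree = sum(a * b > 0 for a, b in (run(inter_links=links, seed=s)
                                       for s in range(20)))
    print(f"{links:2d} inter-group links: same final opinion in {agree}/20 runs")
```

The thing to watch is how the fraction of runs ending in agreement climbs as inter-group links are added; the sharp threshold itself is cleaner in the full model, where the group structure is less extreme than two perfect cliques.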

Now this is quite abstract, but it may still be highly relevant to the real world. A number of studies have noted increasing polarization in recent years, not only on the web, but in geographical zones as well, with some parts of the US, for example, becoming more homogeneously conservative or liberal. Legal academics have been concerned with the consequences of this trend for our democratic discourse, and for our ability to come to collective decisions. What can we do?

Some legal theorists have suggested that one way to counter this trend would be to have special "deliberation days", during which people would come together in "town hall" meetings to discuss key issues. But some experiments carried out by Cass Sunstein and colleagues at the University of Chicago suggest that the outcome of such meetings can be counterproductive.

They had both liberals and conservatives from different cities in Colorado come together to discuss contemporary issues (gay marriage, the Iraq war, and so on) for a day, using surveys to gauge their views both before and after the deliberations. The liberals discussed issues among themselves, in one group, as did the conservatives, in another. What they found is that both groups became more extreme in their views during the discussions, and that the distance between the two groups became larger as a result. The conservatives became more conservative, and the liberals more liberal.

The lesson of these experiments is that those intermediary links between groups are absolutely essential to building overall consensus, and that, in their absence, we should expect an evolution toward greater extremism. But the more encouraging lesson from the network theory of Lambiotte and colleagues is that only a few links between such groups can be remarkably successful in breaking down such polarization. Even if the situation seems bleak, and unchangeable, it may take only a few more contacts to seed a tremendous change.

Tuesday, July 3, 2007

The erratic rhythms of human life

Some ideas from a meeting on Complex Networks, 2-6 July, Sardinia, Italy

Most thinking about the rhythms of human life remains fixated on ancient concepts. For most psychologists and social scientists, physics is the physics of the nineteenth century or before -- the physics of Johannes Kepler or Isaac Newton. Mention "mathematical patterns", and they tend to think of regular cycles akin to planetary motion, or simple linear trends. What they don't realise is that modern physics has moved far beyond these very simple concepts, and today works with a much richer view of mathematical pattern, in which even highly irregular and erratic behavior may turn out to reveal hidden regularities.

Send an email, and sometimes you get a quick reply. Sometimes you don't. Naively, you might expect the response times to emails to work pretty much like lots of other random things in our lives; there would be an average time, with some random scatter about that. But in fact this isn't at all true -- emails come back at us with times that fluctuate in a wildly erratic way. Over the past few years, a number of researchers, including many physicists, have scrutinized the data on email responses and found that they don't follow "ordinary" random statistical patterns, but have what physicists refer to as "fat tails". You find, roughly speaking, that while the bulk of emails solicit quite rapid responses, there are many that instead get replies only after very long times, weeks or months.

This implies, among other things, that email correspondence has a naturally "bursty" character, with long intervals of quiet interrupted sporadically by lots of activity. It's not regular and predictable at all.
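To see what a "fat tail" means in numbers, here's a quick illustration comparing ordinary exponentially distributed waiting times with a heavy-tailed (Pareto) distribution of the same average. These are purely illustrative numbers, not the actual email data:

```python
# What a "fat tail" means in numbers: exponential vs. heavy-tailed (Pareto)
# waiting times with the same average. Illustrative only, not email data.
import random

rng = random.Random(1)
N, mean = 100_000, 1.0

exp_waits = [rng.expovariate(1 / mean) for _ in range(N)]
shape = 1.5                                    # Pareto tail exponent
scale = mean * (shape - 1) / shape             # chosen so the mean is also 1
pareto_waits = [scale * rng.paretovariate(shape) for _ in range(N)]

for label, waits in [("exponential", exp_waits), ("fat-tailed", pareto_waits)]:
    frac = sum(w > 20 * mean for w in waits) / N
    print(f"{label:12s}: share of waits longer than 20x the average = {frac:.5f}")
```

With the same average wait, the exponential case essentially never produces a wait twenty times the mean, while the fat-tailed case does so routinely -- that's the bursty pattern in the email data.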

Curiously, you find the same pattern in how frequently people visit libraries, in patterns of web surfing and in lots of other individual human activities. This irregular behavior seems to be a kind of universal rhythm that typifies human activity. (In fact, a psychologist from Texas, Dave Gilden, has even found very similar irregular patterns in the actions of people who try to do the same task repeatedly, such as tapping their fingers at exact one-second intervals; we seem to do it even if we try not to.)

What's the cause? One might suspect something in the character of the human brain, perhaps, but physicist Laszlo Barabasi and colleagues suggest a much simpler idea -- that much of it comes down to how we prioritize tasks. Suppose that you, like all of us, have lots of tasks you need to take care of -- shopping, doing the dishes, sending a letter and so on. You might just choose items at random off the list, and do them. If you chose which emails to respond to this way, it turns out, the times for email responses wouldn't be what they are. They'd follow ordinary "Poisson" statistics, with response times clustered around a well-defined average.

But suppose instead that some emails have more priority than others, and you tend to respond to them first, while pushing those of lesser priority down the list. The mathematics shows that in this case, because emails of lesser priority keep getting pushed down the list, what naturally comes out are fat tails -- and an amplified chance for email responses to take a surprisingly long time. It's not human procrastination, it seems, but the more or less mechanical consequence of how we deal with sequences of tasks.
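Here's a minimal sketch of that queuing mechanism -- my own toy version, not Barabasi's exact model. A fixed-length to-do list holds tasks with random priorities; at each step we execute either a random task or the highest-priority one, and record how long the executed task sat waiting:

```python
# A minimal sketch of the priority-queue mechanism (a toy, not the exact
# model): tasks with random priorities sit on a fixed-length list, and we
# execute either a random task or the highest-priority one at each step.
import random

def waiting_times(highest_first, steps=200_000, list_len=10, seed=0):
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]   # (priority, arrival)
    waits = []
    for t in range(1, steps + 1):
        if highest_first:
            i = max(range(list_len), key=lambda k: tasks[k][0])
        else:
            i = rng.randrange(list_len)
        waits.append(t - tasks[i][1])          # how long that task sat waiting
        tasks[i] = (rng.random(), t)           # a fresh task takes its place
    return waits

for highest_first, label in [(False, "random selection"),
                             (True, "highest-priority first")]:
    w = waiting_times(highest_first)
    tail = sum(x > 100 for x in w) / len(w)
    print(f"{label:22s}: fraction of tasks that waited > 100 steps = {tail:.5f}")
```

Random selection gives waiting times that cluster around the average, while the priority rule leaves the occasional low-priority task sitting for an extremely long time -- the fat tail, emerging from nothing but the selection rule.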

There are probably many more surprises to come from this kind of work. For one thing, as Laszlo pointed out, it has a curious link to some work of several years ago that found similar fat tails in the way people move around. The way we move from place to place seems to have the same erratic quality as our email response or library visiting patterns. But these fat tails also show up throughout nature, in the way natural disasters and earthquakes strike, in the ups and downs of markets and so on. (I wrote a brief overview a couple of years ago for the business magazine strategy + business.) Indeed, explaining these so-called "power-law" patterns in nature has been big business in physics for twenty years, and will continue that way for some time.

This work doesn't actually have any direct link to networks. But it certainly demonstrates how individuals, without knowing it, end up following quite striking and seemingly universal patterns. And how human science, if it is going to understand human dynamics more effectively, is going to have to embrace the irregular mathematics of modern physics.

Monday, July 2, 2007

Predicting epidemics

Some ideas from a meeting on Complex Networks, 2-6 July, Sardinia, Italy

With the threat of a global outbreak of the H5N1 virus hanging over our collective heads, it's natural to wonder about our science of prediction. We can send satellites into the remotest regions of the solar system, and predict some quantities of fundamental physics to one part in 10 billion. Boeing designs its new aircraft with computational simulations so accurate that test flights are no longer necessary; in fact, as one of their executives told me last year, Boeing only does flight tests to reassure an uneasy public! Given the power of today's science, shouldn't we be able to predict the likely outcome of a new viral outbreak?

One possible response is that we shouldn't hope to be so ambitious, because the spread of disease depends not only on biological factors -- the nature of the virus, for example -- but on what individual people do, on who meets with whom and where people travel. It's deeply entangled with free will and human psychology and all the unpredictability of human behavior, and so we shouldn't be surprised if the fate of an epidemic is a matter of chance and guesswork. But is the situation really so hopeless? Increasingly, network scientists don't think so. It may just be a matter of bringing the right data to bear, and paying attention to the surprising architecture of real-world networks -- such as the network of international air travel.

The fact is that people aren't so unpredictable, especially at the collective level, and technology is making it possible to map out human interactions with more detail than ever before. Using mobile phones, for example, researchers have been able to build up detailed pictures of the social links between people in various communities. A couple years ago, researchers at Los Alamos National Lab used information gathered this way (and from more traditional surveys) to build a computational model that could mimic the evolution of an epidemic within a city by following the second-by-second movements of millions of individuals on their daily paths. This is the social equivalent of Boeing's flight simulations. With this computational tool, you can do experiments to test the consequences of various interventions. What happens if you close the schools, perhaps, or try to reduce the movement of people by public transport? One thing the Los Alamos group found was that the timeliness of the response is absolutely crucial -- measures save many more lives if they're implemented very early on in the course of the epidemic.

Yesterday morning, physicist Alessandro Vespignani spoke about recent work with Vittoria Colizza and other members of his group at Indiana University, which has been aiming to bring data on international air travel -- probably the most important factor for epidemic spread at the global level -- into such models. They've used a massive data set for something like 3,100 airports worldwide, and 20,000 regular flight paths (you can see some animations of such data here), which reveal the larger-scale human flows around the world. They then use this data to model the spread of a disease introduced at a single point -- say, in Vietnam.

This modeling effort is the most ambitious yet to try to bring the data we have to bear on understanding what we're likely to face with an influenza pandemic. The first surprise that emerges from it is that trying to control the epidemic by reducing the flow of people -- taking the obvious step of restricting traffic through all airports, for example -- is remarkably ineffective. Even reducing the number of people passing through airports by as much as 50% has virtually no effect on the ultimate spread of an epidemic. To have much influence, the models suggest, authorities would have to reduce airport traffic by as much as 90% everywhere -- which from a social and economic point of view is probably a non-starter.

This may be a negative lesson, but at least it helps authorities know what NOT to waste their efforts on. A more positive message that emerges from this work is that the cooperative sharing of antiviral drugs between countries may well be the best way to stem the spread of such a disease. (Unfortunately, I have to wonder, how likely is that?)

One other interesting point to emerge from this recent work (discussed more in this paper) is that the outcome of an epidemic may well be more predictable than one might expect. They've run their simulations many times over, seeding an epidemic with the same initial conditions. Although the simulations include lots of probabilistic events (everything depends on the virus passing between people, after all, which are chancy events), the overall outcome remains roughly the same. The reason, they suggest, is that global air transport is dominated by channels running between major airports. These seem to act as preferred pathways or conduits along which the virus tends to travel -- and they obviously represent good targets for, say, monitoring people for infection (if feasible).

This work obviously has huge implications for our collective well-being. But it also makes the point that understanding social processes, especially at the largest collective level, isn't really hampered at all by the mysteries of human psychology. In many ways, we're akin to particles following fairly simple rules, and careful science can learn how to follow and hopefully influence those movements in an intelligent way.
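For readers who want to play with these ideas, here's a toy metapopulation sketch in Python -- a drastic caricature of the models described above, with every city, flow and parameter invented by me -- which should nonetheless give the flavor of the travel-restriction finding:

```python
# A toy metapopulation model, vastly simpler than the real global model:
# twenty "airport" cities, a deterministic SIR epidemic inside each, and
# daily travel between random city pairs that can be scaled down.
# Every number here is an invented illustration.
import random

def days_to_reach_all(travel_factor, n_cities=20, pop=1_000_000,
                      beta=0.5, gamma=0.33, base_travel=2_000, seed=0):
    rng = random.Random(seed)
    S = [float(pop)] * n_cities
    I = [0.0] * n_cities
    I[0], S[0] = 10.0, pop - 10.0          # seed the outbreak in city 0
    arrived = [False] * n_cities
    for day in range(1, 731):
        for c in range(n_cities):          # SIR step within each city
            new_inf = beta * S[c] * I[c] / pop
            S[c] -= new_inf
            I[c] += new_inf - gamma * I[c]
        # Travel: scaled flows between random pairs carry an expected
        # number of infected travelers (a crude stand-in for flight data).
        for _ in range(n_cities):
            a, b = rng.randrange(n_cities), rng.randrange(n_cities)
            I[b] += base_travel * travel_factor * I[a] / pop
        for c in range(n_cities):
            if I[c] > 100:
                arrived[c] = True
        if all(arrived):
            return day                      # epidemic established everywhere
    return None

for factor in (1.0, 0.5, 0.1):
    day = days_to_reach_all(factor)
    print(f"travel at {factor:4.0%} of normal: epidemic in all cities by day {day}")
```

Because infection grows exponentially within each city, cutting the flow of travelers in half only delays the seeding of new cities by a few doubling times -- which is, in miniature, why the 50% airport restriction accomplishes so little in the full model. Rerunning with different seeds also gives a feel for how predictable the overall timing can be, echoing the point above.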

Complex Networks, 2-6 July, Sardinia, Italy

Even at 8 a.m., the sun is blazing hot outside, although it's cool and calm here in the Edificio II of Sardegna Ricerche, a hulking Soviet-style research building in the beautiful mountains near Cagliari, Sardinia. I'm at a satellite meeting of the yearly meeting on Statistical Physics. This satellite (graciously organized and hosted by Alessandro Chessa of the University of Cagliari and Guido Caldarelli of the University of Rome) is focussing on complex networks -- things like social networks, the Internet, food webs, and so on. This is a hot topic in physics, indeed, all of science, and something on which I wrote a book several years ago.

If you look at the physical layout of the Internet (computers linked by telephone lines or satellite links), or the wiring pattern of neurons in the human brain, or the tangled web of social bonds that links together a community (in the image to which I've linked, these are friendships between high school students), you'll see in each case what looks like an unintelligible mess. You'd see the same bewildering complexity in the network of predator-prey relationships in any ecosystem, and in many other settings -- in networks of economic trade, for example. But in fact these and many other natural networks, despite their apparent complexity, possess a hidden order and share deep architectural similarities.

Physicists and mathematicians over the past decade have begun learning how to understand the architecture of such networks, and to build up a real science that explains how and why they have the structures they do. In my book Nexus (or Small World, in the UK) I tried to offer a snapshot of this recent explosion of research.

But lots of work has happened since then, and it seems this exploding field attracts more attention every year, mainly because computers have made it possible to gather and analyze the huge amounts of data needed to map out real-world networks. Research on social networks is particularly important, and is showing that the human world often follows precise mathematical patterns that were unsuspected only a few years ago. Over the next few days, I'll try to report on some of the most interesting developments in this field.

Recently in this blog, I've tended to focus on the psychological side of the social atom -- on the behavior of people as individuals and what influences it. This conference represents work on the other side -- looking at the mathematical patterns that emerge at the larger scale.