Tuesday, July 03, 2012

The Weakness of Strong Reciprocity (Part 2)

In my previous post, I indicated that one weakness in the arguments made by Bowles and Gintis for strong reciprocity was manifest in the experimental research of Vernon Smith and his colleagues: putting the Ultimatum Game into a context of property rights and providing absolute anonymity for dictators in the Dictator Game drastically reduce the behavior that could be interpreted as strong reciprocity.

The variability in the play of these games due to the social context points to a second weakness--it's never clear whether these artificially contrived games are revealing anything about the real world of everyday social interaction, and therefore it's never clear whether these games are providing evidence for strong reciprocity.

When researchers first began conducting behavioral game experiments, they designed the experiments so that the players would be influenced only by monetary calculations.  But now it seems that this is almost never the case, because the behavior of the players is influenced by at least five other factors, which have been surveyed in an article by Steven Levitt and John List (2007), who argue that this casts doubt on the validity of these experiments.  First, the games are often designed to elicit moral or other-regarding behavior in ways that might not often arise in the economic and social interactions of real life.  Second, the players typically feel themselves to be under the scrutiny of the experimenters in ways that would not be true in everyday life.  Third, those individuals who are recruited to participate in these experiments might not be typical of the human population: most of them are undergraduate students in the United States.  Fourth, the contexts in which the games are framed can influence behavior:  for example, as indicated earlier, calling an Ultimatum Game an "exchange" game influences how the players behave.  Moreover, the social context that subjects bring into the game from their social experience can differ from the context that the experimenter is trying to create in the lab.  Finally, the level of the monetary stakes involved in a game can determine whether monetary gain becomes an overriding motive.

Bowles and Gintis summarize the article by Levitt and List in one paragraph (41).  Oddly, they fail to mention the last point--the influence of the monetary stakes. 

As one example of how the scrutiny of the experimenters influences behavior, Levitt and List note that in one experiment, subjects who had never contributed to a charity in real life contributed to the charity in a dictator game.  Bowles and Gintis admit that this shows "that one can never extrapolate directly from the laboratory to behavior in natural settings" (42).

Bowles and Gintis assume that when subjects are playing a one-shot game in the laboratory, they won't be concerned with building or maintaining a good reputation so that other people might cooperate with them in the future.  This is important because actions influenced by a concern for one's reputation cannot show strong reciprocity, for which there can be no expectation of future reciprocation.  But it is often clear that subjects are playing the games in the lab in the context of their past experiences with social interactions in which one's reputation is important.  The cross-cultural experimental research of Henrich et al. (2004) shows that subjects with different cultural experiences play the same games differently because they are acting in the context of their cultural life outside the experimental game.  It seems likely, therefore, that even when subjects are told by experimenters that they are playing a one-shot, anonymous game in which they have no chance to build or maintain a reputation, the subjects might still be influenced by psychic propensities rooted in their concern for reputation.

Bowles and Gintis argue that this is unlikely, because "humans are perfectly capable of distinguishing between situations in which reputation building and retaliation against free-riding are possible and situations in which they are not" (94).  Similarly, in an earlier essay, they declare: "We do not think that subjects are unaware of the one-shot setting, or unable to leave their real-world experiences with repeated interactions at the laboratory door.  Indeed, evidence is overwhelming that humans readily distinguish between repeated and nonrepeated interactions and adapt their behavior accordingly" (2003, 432).

To this, Robert Trivers responds:

Surely, awareness is irrelevant.  You can be aware that you are in a movie theatre watching a perfectly harmless horror film and still be scared to death.  As for leaving real-world experiences at the laboratory door, I know of no species, humans included, that leaves any part of its biology at the laboratory door; not prior experiences, nor natural proclivities, nor ongoing physiology, nor arms and legs, nor whatever.  This is the whole point of experimental work.  You bring living creatures into the lab (ideally, whole) to explore causal factors underlying their biology, the mechanisms in action.  You do not imagine that you have thereby solved the problem of evolutionary origin; that is, that you can shortcut the problem of evolutionary function by simply assuming that the organism's actions in the lab represent evolved adaptations to the lab. (2006, 79)

Trivers argues that we respond to one-shot encounters as if they were part of an on-going chain of social interactions.  So if someone in an Ultimatum Game makes us an unfair offer, we get angry with them and reject the offer, because both our evolutionary history and our individual history have shaped us to maintain a reputation for being indignant when we're treated unfairly.

If Trivers is right, then we should expect to find ethnographic evidence that in those foraging societies most like those of our distant evolutionary ancestors, people show a sense of justice rooted in concerns for kin and for reciprocity (both direct and indirect).  If Bowles and Gintis are right, then we should expect to find that people in small-scale societies engage in the costly punishment that identifies strong reciprocity--the punishment of those who violate social norms even when that punishment is personally costly to the punisher in ways for which there can be no payback for the punisher.

The third weakness in the argument of Bowles and Gintis for strong reciprocity is that there is very little, if any, clear ethnographic evidence collected by anthropologists in the field that shows such costly punishment.  Recently, Francesco Guala has surveyed the relevant research in an article in Behavioral and Brain Sciences (2012), and he has concluded: "there is no evidence that cooperation in the small egalitarian societies studied by anthropologists is enforced by means of costly punishment" (1).  Of course, as Guala indicates, there is plenty of evidence that small egalitarian societies punish behavior that violates their customary norms.  But this punishment is done collectively so that the cost is distributed across many individuals, and therefore no single individual bears an absolute cost that is unlikely to be recouped somehow in the future.

In their commentary on Guala's article, Gintis and Fehr declare: "anthropologists have confirmed that strong reciprocity is indeed routinely harnessed in the support of cooperation in small-scale societies" (2012, 28).  But as Guala indicates in his response, none of the studies they cite provide field research that clearly shows costly punishment.  They cite the work of Polly Wiessner and Christopher Boehm.  But in their commentaries on Guala's article, neither Wiessner nor Boehm clearly supports strong reciprocity with ethnographic studies.  Wiessner says that "experimental and ethnographic evidence do not concur," and "whether positive and negative reciprocity are costly and thus truly 'strong' is difficult to measure in the field" (2012, 44).  She stresses how small-scale societies use institutional practices to minimize the costs of maintaining cooperation for single individuals.  Boehm concludes: "Hunter-gatherer punishment involves costs and benefits to individuals and groups, but the costs do not necessarily fit with the assumptions made in models that consider punishment to be altruistic" (2012, 19).

So, again, I conclude that the evidence suggests that strong reciprocity is weak--that only a few people will act as strong reciprocators, and in most cases, even they will do this only as long as the costs for them are very low.  Indeed, when Bowles and Gintis are challenged, they must concede that defending the reality of strong reciprocity requires admitting its weakness.

I'll continue this in another post.


Boehm, Christopher (2012). "Costs and Benefits in Hunter-Gatherer Punishment."  Behavioral and Brain Sciences 35: 19-20.

Bowles, Samuel, and Herbert Gintis (2003). "Origins of Human Cooperation."  In Peter Hammerstein, ed., Genetic and Cultural Evolution of Cooperation, 429-443.   Cambridge: MIT Press.

Gintis, Herbert, and Ernst Fehr (2012).  "The Social Structure of Cooperation and Punishment."  Behavioral and Brain Sciences 35: 28-29.

Guala, Francesco (2012).  "Reciprocity: Weak or Strong? What Punishment Experiments Do (and Do Not) Demonstrate."  Behavioral and Brain Sciences 35: 1-59.

Henrich, Joseph, et al., eds. (2004).  Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies.  New York: Oxford University Press.

Trivers, Robert (2006).  "Reciprocal Altruism: 30 Years Later."  In P. M. Kappeler and C. P. van Schaik, eds., Cooperation in Primates and Humans, 67-83.  Berlin: Springer-Verlag.

Wiessner, Polly (2012).  "Perspectives from Ethnography on Weak and Strong Reciprocity."  Behavioral and Brain Sciences 35: 44-45.


Roger Sweeny said...

Fascinating series of posts. Are any of the references available online? I keep wanting to click on them.

Larry Arnhart said...

Many of Gintis's articles can be found at his website.

BEHAVIORAL AND BRAIN SCIENCES is also available online.