Sunday, January 25, 2015

Strauss, Melzer, and Esoteric Writing: "There Are No Gods But the Philosophers"

Arthur Melzer's new book--Philosophy Between the Lines: The Lost History of Esoteric Writing (University of Chicago Press, 2014)--provides massive evidence and argumentation in defense of Leo Strauss's claim that political philosophers have written esoterically for thousands of years, although in the past two centuries, this has been largely forgotten. 

And yet, Melzer's book also suggests that modern liberalism's success over the past two centuries shows that esoteric writing is neither necessary nor desirable in a liberal open society, which appears to refute Strauss's core teaching that the philosophic life of the few--the only naturally good life--must be in conflict with the life of the many, which rests on moral, religious, and political delusions and is therefore miserable.

Except for Strauss's short essay "Persecution and the Art of Writing," most of what Strauss and his students have written about esotericism consists of interpretations of particular philosophers as esoteric writers.  Melzer's book is the first to offer a synoptic view of the history of esoteric writing and its philosophic implications.

His book is divided into three parts.  In the first part, he lays out the general evidence and argument for the reality of philosophical esotericism.  In the second part, he explains the four forms of philosophical esotericism: defensive esotericism, protective esotericism, pedagogical esotericism, and political esotericism.  In the last part, he explains the consequences of the recovery of esotericism by offering a guide to esoteric reading and by showing how recognizing esotericism was important for Strauss in defending reason against historicism.

In the first part, he surveys the testimonial evidence for esotericism; he explains the theoretical basis for philosophical esotericism; and he takes up the objections and resistance to esotericism.  The extent of the testimonial evidence is stunning.  In fact, it is so extensive that he offers only a sample of it in the book and refers the reader to an online appendix with 110 pages of quotations beginning with Homer and ending with Wittgenstein.

The theoretical basis of philosophical esotericism is what Melzer calls the "problem of theory and praxis."  This is the core teaching of Strauss that there must always be an irresolvable conflict between the theoretical or philosophic human being and the practical or moral human being.  These are fundamentally different types of human beings.  The very few people who can live a truly philosophic life live the only naturally good and happy kind of life, which is devoted to the quest for truth.  Most people live moral, religious, and political lives based on opinion rather than truth, and thus based on illusory goods that bring misery rather than true happiness. 

This makes esoteric writing natural, necessary, and desirable.  Those few who live philosophic lives must write esoterically to defend themselves from persecution by the many (defensive esotericism), to protect the many from being harmed by the terrible truths that philosophy discovers (protective esotericism), and to teach the secret truths of philosophy to those very few young people capable of becoming philosophers (pedagogical esotericism).

The ancient and medieval philosophers saw that this conflict between these two kinds of life--the contemplative life and the active life--was so natural, necessary, and desirable that it could never be overcome.  And therefore the premodern philosophers never attempted to establish a rational society in which all of society would be open to the truths of philosophy or science.

But the modern philosophers thought such a rational society was possible and desirable, and so they initiated the modern Enlightenment project to destroy traditional or closed societies based on opinion and to establish modern or open societies based on truth.  They saw, however, that this battle against traditional opinion would have to be fought over centuries before the triumph of the new Enlightened society.  And during this period of intellectual warfare, the philosophers would have to use a new kind of esoteric writing (political esotericism) to temporarily hide their political project of Enlightenment.  They had to employ esotericism for the sake of eventually eliminating the need for esotericism.  They would have to lie so that someday lying would be unnecessary.

Indeed, by about 1800, according to Melzer, this modern liberal project was so successful that modern philosophers no longer saw any need to hide their teachings.  Consequently, the reality of esoteric writing prior to 1800 was forgotten; and so when Strauss tried to revive the understanding of esoteric writing, most scholars dismissed this with scorn, irritation, and ridicule.  Of course, for many of Strauss's students, this teaching of esoteric writing and esoteric reading was what made Strauss so seductively attractive, with the promise of being initiated into the secret teachings of philosophy that could be safely revealed only to the naturally superior few.

Melzer also shows how important Strauss's teaching about esotericism was for his defense of reason against historicism.  The forgetting of esotericism after 1800 supported historicism, because it allowed readers to assume that the surface teaching of the philosophers that endorsed the prevailing opinions of their time showed that philosophers were always historically determined in their thinking, and thus the human mind could not transcend history in the pursuit of truth.  By contrast, Strauss and his students could argue that the appearance of conformity to popular opinions was illusory, and that reading the philosophers esoterically could uncover secret teachings that showed the philosophers seeking transcendent truths beyond historically determined opinions.

Was Strauss right about all of this?  Melzer's answer is ambiguous.  He never clearly and explicitly denies any of Strauss's claims.  But he does give his readers intimations that Strauss might have been at least partially wrong.

Melzer repeatedly states that the issue for his book is not whether we today approve of or practice esoteric writing, but whether philosophers of the past approved of and practiced esoteric writing (98, 101-102, 115, 163, 206-207, 228, 283).  About that issue, Melzer is clear in agreeing with Strauss:  the evidence that philosophers generally practiced esoteric writing prior to 1800 is persuasive.

Melzer also seems to agree with Strauss's account of the difference between the premodern philosophers and the modern philosophers--that the ancient and medieval philosophers were "conflictualists" who denied that the conflict between philosophy and politics could ever be overcome, and that the modern philosophers were "harmonists" who thought the conflict could be resolved with the establishment of a rational society.

Melzer also seems to agree with Strauss about the disagreement within modern philosophy between the Enlightenment thinkers and the Counter-Enlightenment thinkers.  All think that reason and politics can be made harmonious.  But they disagree on how exactly this is achieved.  The Enlightenment thinkers see the harmony as achieved by the subordination of politics to the rule of reason.  The Counter-Enlightenment thinkers see the harmony as achieved by the subordination of reason to the rule of politics.

But was Strauss right in endorsing the position of the premodern philosophers--that it was impossible to overcome the conflict between philosophy as the only naturally good human life and the moral, religious, or political life as a miserable life of delusion, and therefore that esoteric writing will always be natural, necessary, and desirable?

Melzer implies that he disagrees with Strauss about this.  Consider this statement by Melzer:
"My friends and colleagues all regard it as curious that I should be the one to write this book.  There are people who have a real love for esoteric interpretation and a real gift for it.  I am not one of them.  My natural taste is for writers who say exactly what they mean and mean exactly what they say.  I can barely tolerate subtlety.  If I could have my wish, the whole phenomenon of esoteric writing would simply disappear." (xvii)
If Melzer rejects esoteric writing, if he thinks it is unnecessary and undesirable, then he must think that Strauss and the premodern philosophers were wrong in believing that esoteric writing was necessary and desirable because the conflict between the philosophic life and the practical life could never be overcome.

In various places in his book, Melzer does say that the modern liberal goal of harmony between reason and society has been achieved or at least approached in modern open societies to the point that esotericism is no longer necessary or desirable (see, for example, 5, 92, 98, 101, 105, 115, 119, 121, 129, 134-43, 153, 159, 163, 168-73, 196-98, 200-203, 206-207, 234, 236, 246, 249, 366, 383-84).  That the philosophic life as based on truth must threaten the social life based on opinion is perhaps true for the traditional societies that have dominated most of human history, but it is not true for the modern liberal societies that have emerged in many parts of the world over the past two centuries. 

Melzer declares: "the idea of subversive truth has little plausibility today.  We citizens of the enlightened, secular, liberal, pluralist, multicultural society have dared to open our doors to every idea and doctrine and have discovered, at length, that all the supposed dangers of doing so were greatly exaggerated.  So we are inclined to ask with some skepticism, not to say condescension: exactly how is it that truth or philosophy is a threat to society?" (168-69).

For example, Melzer indicates, one manifestation of the conflict between reason and politics in traditional society is that slavery was necessary in civilized societies, and philosophers like Aristotle had to write exoterically in support of slavery as natural, while writing esoterically to teach that slavery was unnatural and thus unjust.  But the triumph of liberalism allowed for the abolition of slavery, so that Aristotle's esoteric truth could be publicly embraced (196, 323).  And while the historicist will say that Aristotle's endorsement of slavery shows that he was held captive by the opinions of his time, the practice of esoteric reading can show that he understood the truth about slavery that could not be publicly recognized in his society.

According to Strauss, the premodern philosophers believed that "the gulf separating 'the wise' and 'the vulgar' was a basic fact of human nature," and that "public communication of the philosophic or scientific truth was impossible or undesirable, not only for the time being but for all times" ("Persecution and the Art of Writing," 34). 

If Strauss agreed with this, then that would mean that he thought that liberalism must be a dangerous delusion, and that he must write esoterically to hide his opposition to liberalism.  As Strauss wrote, "if I know that the principles of liberal democracy are not intrinsically superior to the principles of communism or fascism, I am incapable of whole-hearted commitment to liberal democracy" (What Is Political Philosophy?, 222).  We would then have to wonder what kind of alternative he had in mind--what kind of illiberal closed society he would prefer. 

Melzer is completely silent about Will Altman's argument that Strauss did engage in esoteric writing in promoting an illiberal alternative to liberal democracy.  He is also silent about Strauss's professed devotion to "fascistic, authoritarian, imperial principles" and his refusal to crawl to the cross of liberalism (in a letter to Lowith in 1933).

In the last paragraph of his book, Melzer asserts that while Strauss believed he needed to practice esoteric reading, he did not believe that he needed to practice esoteric writing, because he saw no need to overturn the Enlightenment (366).  If that is true, then Strauss must have thought that the premodern philosophers were wrong in believing in the irresolvable conflict between philosophy and politics.  If that is true, then Strauss did not believe in the premodern conception of the philosophic life as a transcendent life, as the only naturally good life.  I am not persuaded by Melzer that this was Strauss's position.  And it's remarkable that Melzer offers no reference to any of Strauss's writing that would support the assertion in this last paragraph.

Melzer himself rejects the premodern conception of the philosophic life.  He says that he is not a philosopher in this sense, and that he has never met anyone who is.  Moreover, he doubts that such a life is even possible (380, note 1).

Oddly, Melzer does not point out that in doubting the reality of this Straussian ideal of the philosophic life as the only naturally good life, he is in agreement with Shadia Drury, who identified this as Strauss's core teaching, and who criticized it as both false and dangerous.  Melzer cites Drury only once in a footnote (383, n. 18).

As Melzer indicates, the classical ideal of the philosophic life as showing "the transcendence of ordinary life" stands to the nonphilosophic life as the divine stands to the human (71-72).  Here Melzer could have quoted Strauss's remark that "if we understand by God the most perfect being that is a person, there are no gods but the philosophers" ("Reason and Revelation," 163).  The serpent in the Garden of Eden told the truth when he told Adam and Eve that on the day they ate the fruit of the Tree of the Knowledge of Good and Evil, they would become like gods (ibid., 169).  By contrast with the divine life of the philosopher, all other human lives are "forms of human misery, however splendid," because they are based on "despair disguised by delusion" (ibid., 146-47).  If Strauss is endorsing this as true--philosophy as divine transcendence of merely human life--then he must have believed that it would be impossible for liberalism to overcome the conflict between the philosophic life and the practical life.

Surprisingly, however, Strauss never clearly offered any proof that the philosophic life was the only naturally good life.  He did occasionally point to Aristotle's arguments for the supremacy of the philosophic life in Book 10 of the Nicomachean Ethics.  Melzer also does this (72, 75, 176).  But neither Strauss nor Melzer reflects on how dubious those arguments are, particularly when considered in the context of the Nicomachean Ethics as a whole; and neither considers how Books 8-9 (on friendship) offer a different conception of philosophy--not as the dominant end of life, but as one of the inclusive ends of life.  They never consider the possibility that the teaching in Book 10 is Aristotle's exoteric teaching, which appears to endorse the Platonic ideal of the philosopher while subtly undermining that ideal by making weak arguments in its favor.  Nor do they reflect on how Aristotle's account of philosophic friendship in a pluralist society resembles what liberals like Adam Smith and David Hume said about philosophic friendship in a commercial society.

If modern liberalism is to succeed in achieving a largely open society with freedom of thought and speech in which the philosophic life and the practical life are in harmony, then liberalism would have to show that there are no deadly truths that are harmful to nonphilosophers.  Strauss and Melzer identify the "most terrible truth" as the truth taught by Lucretius--that "nothing lovable is eternal or sempiternal or deathless, or that the eternal is not lovable" (Melzer, 195-96; Strauss, "Notes on Lucretius," 85, 100, 135). 

Lucretius taught this as part of his evolutionary teaching--that everything has evolved, including human beings, so that the human species is enduring but not eternal.  Those who believe that the human good must be grounded in some cosmically eternal good that has not evolved--a Cosmic Nature, or Cosmic Reason, or Cosmic God--will see this teaching as the "most terrible truth" of nihilism.  But those who accept Darwinian natural right will be satisfied with grounding the evolved human good in human nature, human culture, and human judgment.

I have elaborated some of these points in previous posts here, here, here, here, here, here, here, here, here, and here.

Wednesday, January 21, 2015

On Holloway, "Strauss, Darwinism, and Natural Right"

Leo Strauss thought the crisis of natural right arose because the teleological view of the universe that supported classic natural right has apparently been refuted by modern natural science.  I have argued, however, that a Darwinian understanding of the immanent teleology of life, including human life, can resolve this crisis by supporting a Darwinian conception of natural right.  Carson Holloway has criticized my argument for failing to recognize that any conception of natural right depends on a  "religiously informed cosmic teleology" that is denied by Darwinian science.

In one of his papers--"Strauss, Darwinism, and Natural Right"--Holloway suggests that Strauss himself agreed with him on this.  This paper was published in The Human Person and a Culture of Freedom, edited by Peter Pagan Aguiar and Terese Auer (Catholic University of America Press, 2009), 106-129.  It can also be found online. 

Although I disagree with Holloway's general argument, I do think he has correctly pointed to a strange kind of religious or quasi-religious teleology in Strauss's writing about natural right.

Holloway accurately restates my claim that while modern natural science--and particularly Darwinian science--denies any cosmic teleology, it can affirm an immanent teleology that sustains natural right.  "That is, Darwinism demonstrates how a purposeless cosmos can give rise to purposeful beings, beings with an internal teleology from which we can derive natural standards of right" (108).  If we understand the human good as the desirable, and if we see that the evolutionary process has endowed human beings with at least 20 natural desires, then we can see the fullest and most harmonious satisfaction of those natural desires as a standard of natural right inherent in the teleology of human nature.

Holloway responds to this by suggesting that Strauss would not have accepted this as a resolution of the problem of natural right, because Strauss saw classic natural right as concerned with human excellence or perfection, which transcends the concern in Darwinian natural right with mere human decency or ordinary goodness.  What distinguishes human perfection from human decency is that while human decency is "anthropocentric," human perfection is "cosmocentric" (114).  "Humanism is not enough," according to Strauss, because to understand natural human perfection, we must look up to the "superhuman" and not down, as the Darwinian evolutionist does, to the "subhuman" origins of the human.  We must see, as Strauss says, that man is not "an accidental product of a blind evolution," but the result of a "process leading to man, culminating in man," and "directed toward man" (128).

In Natural Right and History, Strauss distinguishes American social science from Catholic social science in how they handle this issue: "Present-day American social science, as far as it is not Roman Catholic social science, is dedicated to the proposition that all men are endowed by the evolutionary process or by a mysterious fate with many kinds of urges and aspirations, but certainly with no natural right" (NRH, 2). 

Holloway observes: "Strauss exempts Catholic social science from this difficulty presumably because it posits a cosmic hierarchy in light of which these various 'urges and aspirations' can be evaluated and a cosmic teleology in light of which they can be seen as products of an evolutionary process guided by a benevolent cosmic intelligence.  Darwinian naturalism, however, rejects such notions and is therefore left with only the (seemingly inadequate) fact of these various aspirations and their unintelligent origins" (118).  So, without the cosmic teleology guided by a "benevolent cosmic intelligence," Holloway argues, my Darwinian natural right as grounded on the natural desires of an evolved human nature has no way to rank those various and conflicting desires according to any natural standard of human perfection.

My first response to this is to point out that Holloway is completely silent about my emphasis on the need for prudence and habituation in organizing our often conflicting desires into a coherent pattern conforming to our conception of a whole life well-lived (see Darwinian Natural Right, 23-24, 36-49).  He is also silent about my claim that while the generic goods of human life as set by the 20 natural desires are a universal standard for the human species, the ranking and organization of those generic goods requires prudence in judging how this can be done to conform to the nature of each individual.  The naturally good life for a man like Socrates, for whom the natural desire for intellectual understanding must be ranked at the top, is not the naturally best life for those who are not Socratic philosophers.

Apparently, Strauss disagreed with me here.  For he thought that the philosophic life of Socrates was the only naturally good life, and that all other lives--the merely moral lives of ordinary decency--were actually lives of misery and delusion.  Moreover, this ranking of the philosophic life as the only naturally good life was, according to Strauss, rooted in a transcendent cosmic order.

Holloway notes the "certain otherworldliness" in Strauss's "transcendent" conception of the philosopher as standing at the peak of a cosmic hierarchy (110-12).  But Holloway does not reflect on how strange this is. 

How can this "transcendent" conception be consistent with Strauss's denial of Platonic metaphysical dualism and his insistence that Plato was not a Platonist?  It is true that in some of the passages cited by Holloway, Strauss does seem to endorse the cosmology of the "Great Chain of Being" that dominated Western culture for two millennia through the influence of Plato's Timaeus.  But this seems to contradict Strauss's claim that this Platonic cosmology is Plato's exoteric teaching, not his esoteric teaching.

If there is a "benevolent cosmic intelligence," as Holloway indicates, would Strauss say that this is the philosopher?

In his 1948 lecture on "Reason and Revelation," Strauss has a dialogue between "the philosopher" and "the theologian."  Here is one passage:
"The philosopher: denies that human self-assertion and love of truth are incompatible.  For we have a selfish need for truth.  We need the eternal, the true eternal (Plato's doctrine of eros).  The kinship between philosophia and philotimia--lasting fame possible only through knowledge of the truth.  The most far-sighted selfishness transforms itself into, nay, reveals itself as, perfect unselfishness.
"The theologian: philosophy is self-deification; philosophy has its root in pride." 
"The philosopher: if we understand by God the most perfect being that is a person, there are no gods but the philosophers (Sophist in princ: theos tis elengktikos).  Poor gods?  Indeed, measured by imaginary standards. --As to "pride," who is more proud, he who says that his personal fate is of concern to the cause of the universe, or he who humbly admits that his fate is of no concern whatever to anyone but to himself and his few friends." (163)

I don't think Holloway agrees that "there are no gods but philosophers."  Does this even make any sense at all?

Strauss's assertion here that philosophers are as gods reminds me of Nietzsche's warning in Human, All Too Human (sec. 164) that any belief that some minds are "superhuman" (übermenschlich) is a "religious or half-religious superstition."  Of course, Nietzsche himself later in his life affirmed the philosopher as the Übermensch; and that was the Nietzsche that attracted Strauss, who was completely silent about Nietzsche's evolutionary science and democratic liberalism in Human, All Too Human.

I will come back to this strange "otherworldly" Straussian conception of the philosophers as gods in my comments on Arthur Melzer's new book--Philosophy Between the Lines: The Lost History of Esoteric Writing.

Strauss's "Reason and Revelation" lecture was first published in Heinrich Meier's Leo Strauss and the Theological-Political Problem (Cambridge University Press, 2006).  It can also be found online.

I have a previous post on this lecture, also here, where I note that Strauss puts Darwinian evolution on the side of philosophy as opposed to revelation.  I have written many posts over the years responding to Holloway and commenting on Strauss and the problem of natural right.

Wednesday, January 14, 2015

Harry Jaffa, 1918-2015

                   Harry Jaffa in 1959 with his newly published Crisis of the House Divided

Harry V. Jaffa died last Saturday at the age of 96.  The New York Times has a good obituary.

Jaffa was famous among political theorists for his scholarly studies of Aristotle, Thomas Aquinas, Abraham Lincoln, and the American founding.  He studied with Leo Strauss at the New School for Social Research, and he earned the reputation as one of the deepest thinkers among Strauss's students.  In the Strauss Wars, he was the leader of the West-coast Straussians.

Academic scholars rarely receive prominent obituaries in The New York Times.  But as this obituary indicates, he gained some prominence in American history as a "conservative scholar and muse for Goldwater."  I remember well watching Barry Goldwater's televised speech accepting the Republican nomination for President in 1964, which included two famous lines:  "I would remind you that extremism in defense of liberty is no vice.  And let me remind you also that moderation in the pursuit of justice is no virtue."  Goldwater's critics had warned that he was a right-wing extremist who was linked to extremist groups like the John Birch Society.  So many people were surprised and shocked that he would boldly embrace the label of extremism.  Only a few people knew at the time that these words had been provided to Goldwater by Jaffa, who was involved in the writing of the speech.

I have heard that Strauss sent Jaffa a letter telling him that recommending such language to Goldwater was a big mistake, because it would contribute to Goldwater's defeat.  Later, Jaffa explained that he never thought Goldwater had any realistic chance of winning in 1964, and that the real goal of the Goldwater campaign was for conservatives to take control of the Republican Party and then to elect a conservative Republican president sometime in the future.  When Ronald Reagan started his political career in 1964 with a famous nationally televised speech for Goldwater, and then went on to be elected president in 1980, it seemed that Jaffa and other Goldwater Republicans had succeeded.

Jaffa's greatest contribution to the American conservative movement was in adding intellectual depth to conservative thought by showing how conservatism could be understood as part of an intellectual tradition that included Aristotle, John Locke, the American constitutional framers, and Lincoln.  The appeal to Lincoln was controversial among some conservatives and libertarians who saw Lincoln as contributing to the tradition of liberal progressivism.  John Barr has covered this debate over Lincoln very well in his book Loathing Lincoln.

I am especially saddened by Jaffa's death, coming as it does after the deaths in recent years of Joseph Cropsey and George Anastaplo, because these were the three men who gave me some connection to Strauss, whom I never met.  Unfortunately, there was a break between Jaffa and Cropsey during their final decades of life, because of some personal disputes, despite their having been friends from childhood in New York.  Jaffa and Anastaplo remained close throughout their lives, despite the fact that Anastaplo did not share Jaffa's commitment to American political conservatism.

It was a privilege for me to become one of Jaffa's friends.  I remember well when he invited me to lecture on Darwin and evolutionary ethics at Claremont McKenna College in 1987, and I stayed at his home. 

I am also reminded of his generosity in writing publishing blurbs for two of my books.  For Darwinian Natural Right, he wrote: "Larry Arnhart is at the cutting edge of the frontiers of political philosophy today.  His book on Aristotle and Darwin crowns more than a decade of research on the biological foundations of human nature.  He has shown that it is no longer possible to assume that our biological nature is unrelated to our moral nature.  He has therefore gone a long way to restoring the credibility of 'the laws of nature and nature's God,' and of the political science upon which this nation was founded."  For Political Questions, he wrote: "This is a brilliant adaptation of Thomas Aquinas's technique of the disputed question.  Thomas called the Summa Theologica a textbook for beginners, although one may wonder how many beginners ever mastered more than a small portion of it.  Larry Arnhart's book really is for beginners in political philosophy, the best--I think--that there is today.  It is not a substitute, but an encouragement and aid to reading these books, and to thinking about those questions.  And it is profitable equally to our beginning students and to those of Thomas Aquinas."

Jaffa was especially generous in writing these endorsements, because he was skeptical about my commitment to Darwinian evolutionary science.  But as is suggested by what he said about Darwinian Natural Right, he shared my belief that natural right can be rooted in human biological nature.

Some related posts can be found here, here, here, here, here, and here.

Sunday, January 11, 2015

The Political Origins and Evolutionary History of Banking Crises

                                                     The Bank of Scotland in Edinburgh

It is surprising that in all of the discussion of the 2007-2009 financial crisis, there has been little attention paid to a remarkable and puzzling fact:  since 1840 the United States has had 12 major banking crises, while Canada has had none! 

Charles Calomiris and Stephen Haber--in their new book Fragile by Design: The Political Origins of Banking Crises and Scarce Credit (Princeton University Press, 2014)--explain this as a consequence of the differences in the political institutions of Canada and the United States, which supports their general argument that whether a banking system is stable or fragile is the result of political choices in what they call the Game of Bank Bargains.  It also supports their claim that banking crises arise not from market failures but from winning political coalitions designing fragile banking systems that provide short-term benefits--wealth and power--for members of the coalition at the expense of the long-term public good.  Every banking system is created by political deals, and those deals are guided by the logic of politics, not the logic of the market.

They present their reasoning through a coevolutionary history of states and banks engaged in a Darwinian struggle for survival with competing states and banks (16, 60-61, 73-83, 85, 93, 105-106, 492).  The mutual dependence of states and banks is shown by the fact that every nation-state today has some form of government-chartered bank, and by the fact that nation-states and chartered banks have both emerged since 1600, because the creation of the modern state has depended upon the cooperation of rulers, merchants, and financiers.  Merchants have needed states to enforce their contracts and defend their trading routes.  Rulers have needed merchants to build the domestic and international networks of commerce that sustain the state's economy and its imperial power.  Merchants have needed financiers to create and manage complex financial instruments.  Rulers have needed financiers to provide the funding for the wars required for building a modern state.  And the financiers have needed the state to enforce their financial contracts.  The chartered bank pulls all three groups together in exchange for the lucrative special privileges provided by the state.

In 1694, King William and Parliament founded the Bank of England as a joint stock, limited liability company that would have a monopoly in lending money to the British government.  No other banks in England were allowed to take the form of a joint stock, limited liability company.  All other banks had to be organized as partnerships, and they were limited to six members.  From 1689 until the defeat of Napoleon in 1815, England fought a series of expensive wars with France; and the Bank of England was designed to provide the finance for those wars.  The bank's charter was renewed nine times between 1694 and 1844, and each time the bank provided the government a low-interest or no-interest loan.  The British government defeated its military rivals because it was able to borrow more money than they could and at lower rates of interest.  After the defeat of Napoleon in 1815, Great Britain was the only world power.

In contrast to the English banking system, a different kind of system developed in Scotland.  "The Scottish system," Calomiris and Haber observe, "came to represent the very model of competition, innovation, accessibility to credit for the private sector, and stability--all the things the English banking system could have been but was not" (101).  The Scottish banking bargain was very different from the English bargain.  The fundamental difference was the free chartering of banks in Scotland and the free competition among the banks.  There were three specially chartered banks: the Bank of Scotland (1695), the Royal Bank of Scotland (1727), and the British Linen Company (1746).  There were also many provincial banks freely chartered under common licensing rules.  The banks were free to open branches, and these branches could be opened in remote locations with a lower overhead cost than opening a completely new bank.  These branching banks provided a broad access to credit.  Because of their greater size, competitiveness, and diversification of risk, the Scottish banks had lower rates of failure than the English banks.

Canadian banks have also had low rates of failure, and one of the reasons is that like the Scottish banking system, the Canadian system has been based on a nationwide network of branching banks.  A large bank with many branches benefits from economies of scale, from diversifying its risks, and from being able to shift funds across regions in response to differences in demand.  Canada's constitution of 1867 established a federal system in which the central government made economic policy and held a monopoly on the right to charter banks.  This led to a banking system with a few large chartered national banks having many branches, which brought efficiency and diversification of risk.

Calomiris and Haber show how the rules for the Game of Bank Bargains in the United States have differed from those in England, Scotland, and Canada.  In the history of bank bargains in the United States, they indicate, there have been three periods with three different dominant coalitions. 

In the first period, from the Revolutionary War to the early decades of the nineteenth century, a coalition of political elites at both the state and national levels, under the intellectual leadership of Alexander Hamilton, established a banking system to finance the revolutionary war and then the new government under the Constitution.  The Continental Congress created the first chartered bank, the Bank of North America, in 1781.  This was a privately owned commercial bank that had a special relationship with the government as its fiscal agent, and it provoked opposition from local banks without charters and from critics who challenged the special privileges of the national bank as the product of a corrupt political bargain.  In 1791, the new central government established the Bank of the United States to replace the Bank of North America.  This was a private commercial bank, owned and operated by wealthy Federalist financiers, that made the federal government a shareholder, while also giving loans to the government that were repaid through the dividends the government received as a shareholder.  In exchange for this financing of the government, the bank received lucrative privileges from its government charter--including limited liability for its shareholders, the right to hold federal government deposits, and the right to open branches across the country.  No other banks had such privileges.  State governments, however, exercised the power to charter banks within each state; and they could model their charters on that of the Bank of the United States.

The second period in the history of banking in the United States began in 1836, with the closing of the Second Bank of the United States, after Andrew Jackson had stopped its rechartering in 1832.  The new dominant coalition controlling bank chartering and bank regulation was composed of small unit bankers and agrarian populists.  Under a system of free banking, individuals could open banks by registering with the state comptroller, and they did not need a charter from the state legislature.  But this was not a completely open access system, because banks were not permitted to branch in most states, and this limited the entry of banks in sparsely populated areas due to the high overhead costs of opening a bank.  Thus, this became a system of segmented monopoly banking.  In this system of unit banking, local banks were tied to the local economy, so that bankers were more inclined to provide credit in difficult times, since they could not move their funds to different locations.  And consequently farmers had a special interest in preserving restrictions on branching, because local unit bankers would tend to continue issuing credit to farmers during economic downturns.

The banking system supported by this coalition of small bankers and agrarian populists was remarkably unstable.  From 1800 to 1907, there were 11 major banking crises.  After the Panic of 1907, a group of bankers and government officials convened as the National Monetary Commission to formulate proposals for reforming the banking system.  They identified the unit-banking system as the primary problem, and they pointed to the stability of branch-banking systems like that in Canada.  But since the dominant banking coalition was too powerful to allow fundamental reform, they proposed the creation of a new central bank that could make loans to banks that were under stress.  Here they were following the example of the Bank of England, which had become, by the middle of the nineteenth century, the lender of last resort in the British financial system.  This led to the establishment of the Federal Reserve System in 1913.  But this did not prevent a massive wave of bank failures from 1920 to 1933.

The Glass-Steagall Act of 1933 established federal deposit insurance, and it is common for high school American history books to assert that this was necessary to save the American banking system.  In fact, this was the product of lobbying by unit bankers to support their banks and protect themselves from the competition coming from branch banking.  In the 1920s, those states that experimented with deposit insurance found that this made their banking systems unstable, because when depositors were insured, they had less incentive to worry about the riskiness of banks, which encouraged imprudent lending that led to bank failures.

In the 1980s, the unit banker-agrarian populist coalition was weakened by various demographic, technological, and economic changes, which eventually initiated a third period of American banking history with a new dominant coalition of megabanks and urban activist groups.  This shift became clear when the Congress in 1994 passed the Riegle-Neal Interstate Banking and Branching Efficiency Act, which allowed banks to branch both within states and among states.  This brought about a series of mergers and acquisitions creating megabanks (like JPMorgan Chase and the Bank of America) with nationwide branches.

But if this system of interstate branch banking is more stable than local unit banking, as Calomiris and Haber argue, then we have to wonder why the Great Financial Crisis of 2007-2009 occurred.  The common answer is that this resulted from an excess of "deregulation" and a foolish optimism that free markets without government regulation could regulate themselves.  Not only were banks free to merge and open branches across the country, they were also freed from restrictions on interest rates on deposits; and they were freed from restrictions that had separated commercial banking from investment banking. 

In 2011, the Financial Crisis Inquiry Commission created by the Congress issued its final report with its conclusions about what had happened in the Great Financial Crisis.  The report declared:

"We conclude widespread failures in financial regulation and supervision proved devastating to the stability of the nation's financial markets.  The sentries were not at their posts, in no small part due to the widely accepted faith in the self-correcting nature of the markets and the ability of financial institutions to effectively police themselves.  More than 30 years of deregulation and reliance on self-regulation by financial institutions, championed by former Federal Reserve chairman Alan Greenspan and others, supported by successive administrations and Congresses, and actively pushed by the powerful financial industry at every turn, had stripped away key safeguards, which could have avoided catastrophe." (xviii)

And yet immediately after this passage, the report declares that regulators--in the Securities and Exchange Commission, the Federal Reserve, and other agencies--had all the regulatory authority necessary to protect the financial system, but they chose not to use it.  In particular, policy makers and regulators "could have stopped the runaway mortgage securitization train," but they chose not to do this.

This points to what seems to have been the primary stimulus for the financial crash--the crisis in housing finance, and especially the market for "subprime loans."  To understand this, Calomiris and Haber contend, we have to understand the new U.S. bank bargain that was struck through the influence of the coalition of megabanks and urban activists.  There are two conditions for banking crises.  Banks must make too many risky loans.  And they must maintain too little capital to protect themselves against losses from those risky loans.  This political coalition supported policies that promoted both of those conditions.

Under the Community Reinvestment Act (CRA) of 1977, banks were required to serve their communities.  Various activist groups interpreted this to mean that banks should expand their mortgage lending in poor and inner-city neighborhoods where many borrowers would not be able to satisfy high standards to qualify for the loans.  Bankers seeking mergers to create megabanks needed the mergers approved by the Federal Reserve Board, and the Board could be swayed by evidence that a bank had contracted with activist groups to channel credit to low-income communities.  Then, under pressure from activist groups, Congress began to put mandates on the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac)--two government-sponsored enterprises--to purchase mortgage loans made to targeted groups.  To meet these demands, Fannie and Freddie had to weaken their underwriting standards.  They began buying mortgages for people with weak credit scores, little or no documentation of income, and down payments of 3 percent or less.  In 2006, 46% of first-time home buyers got mortgages with no down payment at all.  These weaker standards were then extended to everyone applying for a loan, so that the American middle class was drawn into the dominant coalition, because now middle-class borrowers could qualify for more luxurious homes than they could afford to pay for, with the expectation that rising housing prices would protect them from financial disaster.

The analysis by Calomiris and Haber is guided by their Public Choice perspective--by their assumption that people generally pursue their self-interest, and that in a democracy, people form political coalitions to serve their selfish interests by getting politicians, who are motivated by their self-interest in reelection, to favor policies in the interest of the coalition, although it is contrary to the long-term public interest.

If that is true, then it is unlikely that good ideas about why banking crises occur--like the ideas conveyed by Calomiris and Haber--will ever lead to reforms to make banking crises less common and less severe.  Therefore, Calomiris and Haber leave us with a deeply pessimistic lesson: "readers should not expect politicians or regulators to do much to prevent the next banking crisis" (281).

There seem to be two ways to escape from the fatalistic pessimism of Calomiris and Haber.  One is to look to cases like the Canadian banking system as a model for reform.  They never explain clearly why this can't be done, except to suggest that the institutional changes required to do this would be unfeasible.

Canada seems to show that the political Game of Bank Bargains can create a good banking system if the political institutions allow this.  And Calomiris and Haber recognize this (499).  But another possibility is to eliminate the political Game of Bank Bargains completely by moving from central banking to free banking.  There has long been a debate over the relative merits of central banking and free banking, and libertarians have argued that in a totally free market, every individual would possess the right to become a banker and make his own banking policies.  The only role of government would be to enforce financial contracts and punish fraud.  (This debate has been surveyed in Vera Smith's The Rationale of Central Banking and the Free Banking Alternative [Indianapolis: Liberty Press, 1990].)

Although it's hard to find clear historical examples of a fully free banking system, Scotland comes closest to the libertarian ideal.  Relying on Lawrence White's Free Banking in Britain: Theory, Experience, and Debate, 1800-1845 (Cambridge University Press, 1984), Calomiris and Haber agree with White that Scotland's free banking system was remarkably efficient and stable (103, 112-14, 169-71, 302-303).  (Calomiris and Haber are silent about Murray Rothbard's argument that White is mistaken, and that Scotland was not really a free banking system at all.)  But in their criticism of "libertarian utopianism," they insist that the Scottish banking system depended on "very special circumstances" that are unlikely to be repeated elsewhere (491).

They also argue that the libertarian understanding of the ideal state as confined to "providing defense and enforcing voluntary contracts under a clear rule of law" ignores the Darwinian historical reality that states without governmentally created banks have not been able to survive.  They explain:
"Throughout history, military and economic competition among states has been a driving force in bank chartering and regulation.  The fact that organized violence is a key function of the state has been the single most important reason that states have needed to charter and control banks.  The narrow conception of the state that omits an activist role for government in the shaping of banking errs in two fundamental ways.  First, as a matter of history, it ignores the central and necessary role of government in creating effective banking systems.  The world of Renaissance banking was one of perennial credit scarcity.  Second, it ignores the ineluctable logic of how--for better or worse--governments must create and allocate power: any government choosing to forbear from using banks as a tool to gain military and economic advantages would soon be replaced by a stronger government that did.  Like it or not, banking policy will always be a powerful tool of statecraft.  To narrowly conceive of the state as only a courtroom in which laws are enforced is to ignore the political foundations of all laws and the military and economic foundations of the competition among political regimes." (492)
But doesn't this beg the question of whether central banking is the only possible way to finance a strong government, or whether the efficiency and stability of a free banking system could generate abundant credit for government?

Monday, January 05, 2015

The Evolution of Private Property Anarchism

Although Adam Smith wants a limited government, it is clear that Smith thinks that the "system of natural liberty" does require some governmental activity to provide the military defense, the administration of justice, and the public goods that cannot be provided through private activity and free markets.  He is not an anarchist.  But some of those people who agree with Smith about the need for a "system of natural liberty" in a commercial society think that any government inevitably expands its powers to the point that it threatens individual liberty; and so the only way to secure liberty is to abolish government, and to allow social order to evolve spontaneously through private property anarchism.

Surely Smith would say that government secures the life, liberty, and property of individuals; and without that security for individual rights, the system of natural liberty would be impossible.  A free society could not exist in a condition of anarchy without government.

But is government necessary to provide this security for individual rights?  Private property anarchists--such as Murray Rothbard and David Friedman--say no (Stringham 2007).  Smith shows us that our needs for goods and services are best satisfied through private exchanges in free markets.  So why can’t we satisfy our need for a secure social order in the same way?  If social order can arise spontaneously as an unintended order through the interaction of individuals acting for their individual interests, without the need for a centrally planned and executed design by a guiding intelligence, which seems to be the point of Smith’s market model of human life, then why can’t we have an anarchistic society without government, without a central political authority?

Most of us would answer that a social order without government is impossible, because without government, society collapses into disorder or chaos.  Most of us, then, agree with Hobbes that anarchy—human life without government—is a war of all against all.  But is that really true?

To see the possibility, desirability, and even obviousness of anarchy, John Hasnas has argued, all we have to do is look around and look back (Hasnas 2008).  We need to look around our world today and see the many ways in which law and order arise spontaneously without government.  And we need to look back in history to see how anarchic order without government has emerged throughout human evolutionary history.

For most of our history, we have lived in anarchic societies without government, because for hundreds of thousands of years, our evolutionary ancestors lived as nomadic hunter-gatherers without any formal governmental institutions.  Anthropologists who have studied hunter-gatherers in the twentieth century have shown how they sustain social order with customary legal rules based on bonds of kinship and reciprocity and the arbitration of conflicts by men who have the reputation for trustworthy judgment.  Any troublesome offender against the customary norms could be ostracized and expelled from the community or killed.

Private property anarchists like Bruce Benson have cited this research as proof that law originated in anarchy as customary law voluntarily accepted by all the individuals of a society who saw the benefits of peaceful cooperation.  Only much later in human history did law appear as authoritarian law imposed from above by some coercive governmental authority (such as a king, a legislature, or a supreme court).  The same kinds of customary legal systems found in primitive societies can be found in more complex societies, such as medieval Iceland, Anglo-Saxon England, medieval Europe, the American West of the 1800s, and even in modern commercial societies today (Benson 1990, 2007).

David Friedman has pointed to medieval Iceland during the period of the "Free Commonwealth" (930-1262) as one of the best examples of civilized anarchy (Friedman 2007).   In the second half of the ninth century, King Harald Fairhair unified Norway under his rule.  Some of his people fled his rule and found their way to Iceland, where they established a social system based on Norwegian traditions, but without a king or any centralized executive authority.  The only centralized authority in Iceland was an assembly of local chieftains who represented their assemblymen.  Every assemblyman was attached to a chieftain to whom he paid a fee. The chieftaincy was private property that could be bought and sold.  The assemblymen could change their allegiance without changing their residence, so the chieftaincies were not based on territory.  This freedom of assemblymen to move from one chieftaincy to another (along with their fees) created a free competition between chieftains so that chieftains had an incentive to serve their assemblymen.  The legal system worked largely through private enforcement based on arbitration.  Victims initiated prosecution of offenders.   Victims (or their survivors) could agree to a settlement with offenders.  Or cases could be settled by arbitration.  If offenders were convicted in court, the judgment would be a fine to be paid by the defendant to the plaintiff.  If a convicted defendant refused to pay the fine, he could be declared an outlaw, and anyone was free to kill him.

This system worked well for almost 300 years, until about 1230.  By then, six large families had gained control of most of the original chieftaincies, and the competition among these families led to civil wars.  Once the rich farmers grew frustrated with the disorder of the civil wars, they accepted the invitation of the King of Norway to become part of his kingdom in 1262.

Another example of anarchic law cited by Benson and others is Anglo-Saxon England (from the end of Roman occupation in 410 to the Norman Conquest in 1066) (Benson 1990, 2007).  People joined voluntary groups of one hundred men or households that settled disputes and enforced customary law.  What in a modern legal system would be considered “crimes” against the state were treated in Anglo-Saxon law as private torts, and private parties settled disputes without government.  Offenders were required to pay restitution to their victims.  Offenders who refused to pay were treated as outlaws outside the protection of law.  This system of voluntary and customary law was weakened when the Anglo-Saxon kings expanded their power through the concept of the “king’s peace.”  Crimes were declared to be violations of the “king’s peace,” and criminals had to pay restitution to the king, which increased the king’s revenue.  After the invasion of 1066, the Normans expanded the scope of the “king’s peace” even more.

A prime example of anarchic law in medieval Europe is the Law Merchant (lex mercatoria) (Milgrom, North, and Weingast 2007; Benson 1990, 30-36).  As commercial trade increased in Europe in the eleventh and twelfth centuries, merchants needed an international commercial law to regulate their commercial transactions.  The merchants themselves set up private courts to settle disputes and develop customary laws for commerce.  Merchants recognized the mutual gains from exchange facilitated by this voluntary law, and those merchants who refused to accept this law were excluded from trade.  This Law Merchant provided the basis for modern international commercial law.  This was all done by private groups without government.

A similar kind of anarchic voluntary law emerged in the American West in the 1800s (Anderson and Hill 2004, 2007).  Contrary to the popular image of the early American West as lawless and violent,  Terry Anderson and Peter Hill have shown that the wild West was not really so wild, because people formed voluntary organizations to enforce customary norms that protected private property and facilitated peaceful cooperation.  From 1830 to 1900, although they were officially under the authority of government agencies, many areas of the American Western frontier were beyond the reach of government.  In this anarchic situation, customary law was enforced by private protection agencies, vigilantes, cattlemen’s associations, mining camps, and wagon trains.

Today, in the United States and other modern nations, most people assume that the anarchy of past history has disappeared, and now law and order depend on the formal institutions of government exercising coercive authority—legislatures, executive officers, courts, bureaucrats, and police.  Hasnas and other private property anarchists insist, however, that if we look around, we can see anarchic law in action as private individuals and organizations formulate and enforce voluntary law without any dependence on governmental authority (Hasnas 2008, Benson 1990). 

We should notice that there are more private police in the United States than public police.  In shopping malls, gated communities, business offices, schools, and churches, we see privately employed security guards and police agencies, because the public police are unreliable.  There were no public police in the United States at all until the 1840s.  The New York City police department was not created until 1845.

National and international commercial law depends mostly on private mediation and arbitration services.  Business contracts usually contain provisions agreeing that disputes will be settled by some specified arbitration service or court.  Businesses, universities, homeowner associations, and religious groups all have their own private regulations and judicial procedures for settling disputes. 

Most of the Anglo-American common law that governs social life in Great Britain and the United States arose originally through an evolutionary process of spontaneous order in which customary law developed through the settlement of actual disputes.  Tort law, property law, contract law, commercial law, and criminal law all arose in this way.  Most people assume that government had to create these laws through statutory legislation.  But what really happened is that much of the common law that arose originally as customary law was codified through legislation.   Common law was not created by the deliberate design of those in governmental offices to serve some intended end.  It was created by the interaction of innumerable individuals over centuries who were looking for ways to settle disputes that would reduce violence and increase cooperation.  This was an anarchic system of law because it arose through the voluntary agreement of individuals rather than the coercive authority of government.

That’s the argument of the private property anarchists.  Would Smith agree with them?  We might think that he should agree with them insofar as they are extending his market model for the spontaneous evolution of order to explain the evolution of legal order without government.  But as we have seen, Smith believed that even the system of natural liberty would need government to perform its three duties—military defense, administration of justice, and public goods.  So, for Smith, the power of government should be limited but still essential.  It seems that Smith is a limited government liberal, not a private property anarchist.

Smith might agree with the anarchists about primitive societies being anarchic, with customary law but no government.  As we have seen, Smith sees the history of society as moving through four stages--the age of hunters, the age of shepherds, the age of agriculture, and the age of commerce.  Government first arises in the second stage, when disputes over property make government necessary.  But when human beings live by foraging--hunting wild animals and gathering wild plants--there is no need for government, since disputes can be settled by informal social authority (WN, 689-90, 708-15).  But in at least one passage of The Wealth of Nations, Smith suggests that even hunting-gathering bands are governed by “chiefs” who act as judges in peace and leaders in war (783).

The reason for this confusion is that while primitive foragers can live in “stateless societies,” as anthropologists today would call them, because there is no formal institutional structure of centralized coercive authority that would constitute a “state,” there is, nonetheless, some informal and episodic social ranking in which some individuals act as leaders in arbitrating disputes or fighting in war.  Whether this is anarchy depends on how one defines anarchy.  If anarchy means a society without the centralized government of a state, then this is anarchy.  But if anarchy means a society without any kind of governance, then this is not anarchy; and anarchy has never existed in any social order.  Some of the private property anarchists have conceded that a society without governance is impossible, and that what they are identifying as anarchic societies are societies with self-governance, but without a centralized coercive state  (Hasnas 2008, 112).

In all of the examples of anarchic legal systems presented by the private property anarchists, one can see some structure of governmental authority in which some people exercise leadership.  Benson points to the Kapauku Papuans of West New Guinea as an example of a primitive society living in anarchy without government.  And yet they do have a leader or headman that they call tonowi, which means “the rich one.”  He is a person who has earned the respect of others, who voluntarily choose to follow him because he is generous, honest, and has good judgment.  His authority is based on persuasion rather than coercion.  He can even change the customary laws through his own deliberate design, as long as his followers voluntarily accept the change (Benson 2007, 629-30, 632-34).  Although Benson claims that Kapauku society has no government at all, this leadership by a headman looks like government.

Similarly, while the “Free Commonwealth” of medieval Iceland was stateless—in the sense that it did not have a centralized bureaucratic state apparatus—it still had political rule.  It was a chiefdom, but with multiple competing chieftains.  So what we see here is not the absence of government, but rather the freedom from tyranny that can come from a system of decentralized, limited government.  Jesse Byock, one of the leading scholars studying medieval Iceland, identifies the “Free Commonwealth” as a “decentralized government” (Byock 2001, 94).

Likewise, the making and enforcement of Anglo-Saxon law was highly decentralized, and yet there was government.  Kings always existed, and they could be called upon to help victims of violence who were not strong enough to enforce restitution from a guilty offender.  Kings were war leaders, and they expanded their power through centuries of warfare (Benson 1990, 26-27; Benson 2007, 542-43).  Anarchists have a hard time explaining how military power can be organized without governmental authority.

While private property anarchists have pointed to the evolution of customary law on the American western frontier as an example of anarchy, Anderson and Hill concede that the early American West was “not completely anarchistic” because government agencies “were always lurking in the background” (Anderson and Hill 2007, 639, 642).  The same could be said about all of the examples of private customary law that anarchists see in modern commercial societies:  they all appear under the shadow of government, because people know they can appeal to governmental institutions if private law fails to satisfy their needs.

Anderson, Terry L., and P. J. Hill, The Not So Wild, Wild West: Property Rights on the Frontier (Stanford, CA: Stanford University Press, 2004).
Anderson, Terry L., and P. J. Hill, "An American Experiment in Anarcho-Capitalism: The Not So Wild, Wild West," in Stringham, Anarchy and the Law, 639-57.
Benson, Bruce, The Enterprise of Law: Justice Without the State (San Francisco: Pacific Research Institute for Public Policy, 1990).
Benson, Bruce, "Are Public Goods Really Common Pools? Considerations of the Evolution of Policing and Highways in England," in Stringham, Anarchy and the Law, 538-64.
Benson, Bruce, "Legal Evolution in Primitive Societies," in Stringham, Anarchy and the Law, 624-38.
Friedman, David, "Private Creation and Enforcement of Law--A Historical Case," in Stringham, Anarchy and the Law, 586-601.
Hasnas, John, "The Obviousness of Anarchy," in Roderick Long and Tibor Machan, eds., Anarchism/Minarchism: Is Government Part of a Free Country (Burlington, VT: Ashgate Publishing, 2008), 111-31.
Milgrom, Paul, Douglass North, and Barry Weingast, "The Role of Institutions in the Revival of Trade: The Law Merchant, Private Judges, and the Champagne Fairs," in Stringham, Anarchy and the Law, 602-23.
Smith, Adam, The Wealth of Nations, 2 vols. (Indianapolis: Liberty Fund, 1981).
Stringham, Edward, ed., Anarchy and the Law: The Political Economy of Choice (New Brunswick, NJ: Transaction Publishers, 2007).

Sunday, December 28, 2014

The Biopolitical History of the Gombe Chimpanzees

Ferdinand Has Been the Leader of the Kasekela Community of Chimps since March 2008

Titan is the Biggest and Strongest of the Kasekela Chimps, and He is Ranked as Number 3

Gombe National Park and the Borders of the Mitumba, Kasekela, and Kalande Communities

In the August 2014 issue of National Geographic, there's an interview with Jane Goodall about the early days of her study of the chimpanzees in Gombe National Park in Tanzania.  Her work there began in July of 1960.  Then, in 1962, she went on leave to Cambridge University to earn a Ph.D. in ethology (animal behavior).  She was told by her academic advisors that she was doing everything wrong.  She had given each chimpanzee a name, and she had recorded anecdotes that displayed their individual differences in personality--in their emotions, motivations, and thoughts.  All of this was condemned as unscientific, both because it did not reduce everything to quantitative patterns of data, and because speaking about animals as having individual personalities was regarded as an anthropomorphic projection of uniquely human traits onto nonhuman animals.

As is indicated by the title of her magnum opus--The Chimpanzees of Gombe: Patterns of Behavior (1986)--she did collect and analyze data from her research so as to show behavioral patterns, but she also told stories about the chimpanzees of Gombe as unique individuals with different personalities.  So, for example, she could show patterns of dominance behavior in both male and female dominance hierarchies; and she could predict that every chimpanzee community will tend to have such a structure of social ranking.  But she could also show that individuals were highly variable in their propensities to dominance.  Some individuals were by temperament more ambitious than others.  And it was not possible to predict precisely which individuals would become alpha male or alpha female, because this depended on the historical contingencies of unique individuals interacting in unique circumstances.  Only after a power struggle had brought one individual to the top could the scientists at Gombe retrospectively reconstruct what had happened.

Similarly, once the Kasekela Community had conquered the Kahama Community in a war and annexed its territory--a war that began in 1974 and ended in 1978--Goodall could reconstruct the history of what had happened.  But the war was a shocking surprise to her, because she had been reporting that the chimpanzees did not kill members of their own species.  Now she and her colleagues can see that there are three communities in Gombe, named for three rivers in their territories, with clear borders between them: the Mitumba Community in the north, the Kasekela Community in the middle, and the Kalande Community in the south.  Males form border patrol parties, and if they find that they outnumber their opponents, they will viciously attack and sometimes kill members of other communities. 

Goodall and colleagues such as Richard Wrangham and Michael Wilson argue that we can see here the evolutionary roots of warfare among our primate ancestors.  As I have indicated in various posts, this has set off an intense debate with those who dispute the evidence for warfare among chimpanzees and human foragers.

Like all social sciences, ethology or animal behavior is a historical science that studies the unique history of unique individuals in unique communities.  In such a historical science, one can predict general patterns; but one cannot precisely predict the future.  And so, as I have argued, a biopolitical science will have to be a science of the political history of primates that will include ethological science.  I disagree, therefore, with those proponents of biopolitics like John Hibbing who assume that a biological science of politics cannot include ethological history.

The research at Gombe is now directed by Anne Pusey as director of the Jane Goodall Institute for Primate Studies at Duke University.  This research allows us to see the continuing history of the Gombe chimps.

Much of this history is presented in a beautiful book of photography with extended commentary--Tales from Gombe by Anup Shah and Fiona Rogers (first published by the Natural History Museum of London and then by Firefly Books, Buffalo, New York, in 2014).  Shah and Rogers explicitly identify themselves as photographers rather than scientists, and so their writing does not have scientific rigor.  But they do tell engaging stories.  And their photographs are stunning.

For more rigorous studies, one needs to turn to the work of Pusey and her colleagues--for example, Emily E. Wroblewski, Carson M. Murray, Brandon F. Keele, Joann C. Schumacher-Stankey, Beatrice H. Hahn, and Anne E. Pusey, "Male Dominance Rank and Reproductive Success in Chimpanzees, Pan troglodytes schweinfurthii," Animal Behaviour 77 (2009): 873-885.

Shah and Rogers continue Goodall's emphasis on the unique personalities of the chimpanzees and their unique life histories.  In fact, their photographs are their most convincing evidence for this, because we can see the different characters and emotional temperaments displayed in their faces.  This continues a tradition that goes back to Darwin's Expression of the Emotions in Man and Animals (1872).

Like Goodall, Shah and Rogers show that becoming the male leader is not determined just by size and strength.  Since March of 2008, the male leader has been Ferdinand, who has formed an alliance with his elder brother Faustino, who is ranked number two.  The biggest and strongest chimp is Titan, but he is only number three.  Titan is ambitious, but he lacks the confidence, the mental stability, and the shrewdness of judgment displayed by Ferdinand.

Shah and Rogers cannot predict who will be the next alpha male.  They speculate that if Ferdinand were to be weakened by illness, or if Faustino were to remain neutral in a fight between Ferdinand and Titan, this might give Titan the opportunity to take the number one position.  But they admit that knowledge of past history is not enough to predict a future that depends on contingencies.

Shah and Rogers might also have mentioned another source of historical contingency--the cultural diversity in chimpanzee politics.  Every chimpanzee community has a unique political culture as shaped by its unique history.  Consequently, as I have often suggested on this blog, a biopolitical science would have to move through three levels of evolutionary history--the natural history of the political species, the cultural history of the political community, and the individual history of political agents.

What exactly is the benefit of becoming the politically dominant male?  One obvious possibility from a Darwinian perspective is that higher male dominance rank gives greater access to females and thus reproductive success.  But it has been hard to test this in the wild.  Pusey and her colleagues determined the paternity for 34 offspring over a 22-year period for the Gombe chimpanzees, and they concluded that male reproductive success did generally come from dominance rank creating priority of sexual access.  But they also saw that lower-ranking males sired more offspring than predicted.  They write: "our study confirms that male rank generally correlates with reproductive success.  However, younger males had the highest success per male, and low-ranking males successfully produced offspring more often than was predicted by the priority of access model.  Low-ranking fathers sired offspring with younger, less desirable females and appeared to use the consortship strategy more often than higher-ranking fathers" (880).

I see no reason why such scientific study of chimpanzee politics should not be part of political science and political philosophy.  A biopolitical science would fulfill an intellectual vision originating with Aristotle.  Although Aristotle did not know of chimpanzees, he did carefully study apes and identify them as the animals most similar to human beings.

Aristotle also recognized that nonhuman animals had unique personalities and cognitive abilities.  He observed: "in a number of animals, we observe gentleness or fierceness, mildness or cross temper, courage or timidity, fear or confidence, high spirit or low cunning, and, with regard to intelligence, something equivalent to shrewdness" (History of Animals, 8.1).

Some posts on related topics can be found here, here, here, here, here, here, and here.

Friday, December 19, 2014

Markets and Morals (2): The Roots of Human Sociality Project

There is nothing inherent in market exchange that makes markets morally corrupting.  Actually, one might argue, markets depend on morals.  Voluntary exchange in markets requires trust and a sense of fairness.  Before you deal with strangers, you have to trust that they won't cheat you.  You have to trust that your property is secure.  You have to trust that social norms of fairness and the rule of law will enforce contracts, protect your property from confiscation, and keep banks sound.  You have to trust that the legal system will punish violence, fraud, and corruption.

Commercial society did not arise in the modern world until the development of the moral infrastructure of the bourgeois virtues.  When those bourgeois virtues are absent or weak, markets fail to work.

Our recent experience with the global financial crisis illustrates the failure of markets without trust and fairness.  Financial markets are built on trust in promises.  In primary financial markets, borrowers sell their promises to repay their debts to lenders.  In secondary financial markets, investors buy and sell these promises.  The problem is that once these promises are made tradable, the networks of trust are weakened, and it becomes hard to judge the trustworthiness of the promises.  Imprudent risk-taking, unscrupulous greed, and fraudulent deception can then lead to financial collapse.

If markets do depend on morals, then we should expect that societies with extensive market experience will show the moral norms of trust and fairness on which markets depend.  Experimenting with economic games is one way to test this.

Consider the research program that has come to be called "The Roots of Human Sociality Project."  Since 1997, about two dozen anthropologists and economists have been studying the evolution of prosocial norms by combining experimental economics and anthropological field ethnography in gathering evidence from small-scale societies around the world.  This research has been presented in a series of articles and two books (Henrich et al. 2004; Ensminger and Henrich 2014).

Much of neo-classical economics and classical game theory has been dominated by the Homo economicus model of human beings as rationally selfish maximizers of their utility.  But in the 1980s and early 1990s, experimental economists discovered that the predictions of the Homo economicus model were not fulfilled in the way people played the Ultimatum Game.  In this game, the experimenter provides some amount of money (say, $10) for two players.  One player, designated the proposer, proposes a split of this money between the two players.  The other player, designated the responder, responds by either accepting this proposed split or rejecting it.  If he accepts it, the money is split as proposed.  If he rejects it, neither player receives any of the money.  The Homo economicus model predicts that the proposer will take most of the money for himself ($9) and offer the responder a small amount ($1), and that the responder will accept this, because a small amount of money is better than none at all.  But this is not what usually happens.  In most cases, the proposer offers to split the money in a proportion close to 50-50 ($5 for each), and the responder accepts.  When the proposer offers a smaller amount to the responder, the responder usually rejects the offer.  Apparently, responders are expressing their moral indignation against unfair offers.  This expression of moral sentiments in the Ultimatum Game has been shown to be correlated with activation of the brain's reward systems, which suggests that social norms of fairness have been internalized in the brain (Fehr and Camerer 2007; Sanfey 2007; Sanfey et al. 2003).
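The payoff logic of the Ultimatum Game can be sketched in a few lines of code.  This is only an illustration of the rules described above; the 30-percent rejection threshold for the "fairness-minded" responder is a hypothetical assumption for the sketch, not a parameter from the studies cited.

```python
def ultimatum_payoffs(pot, offer, accept):
    """Return (proposer, responder) payoffs for one round."""
    if accept:
        return pot - offer, offer
    return 0, 0  # rejection destroys the whole pot

def homo_economicus_responder(offer):
    # Any positive amount beats nothing, so always accept.
    return offer > 0

def fairness_minded_responder(offer, pot, threshold=0.3):
    # Reject offers below some fraction of the pot out of indignation.
    # The threshold value here is a hypothetical assumption.
    return offer >= threshold * pot

pot = 10
low_offer = 1   # the Homo economicus prediction: keep $9, offer $1
fair_offer = 5  # the offer most commonly observed in practice

print(ultimatum_payoffs(pot, low_offer, homo_economicus_responder(low_offer)))       # (9, 1)
print(ultimatum_payoffs(pot, low_offer, fairness_minded_responder(low_offer, pot)))  # (0, 0)
print(ultimatum_payoffs(pot, fair_offer, fairness_minded_responder(fair_offer, pot)))  # (5, 5)
```

The point of the sketch is that a selfish proposer facing a fairness-minded responder walks away with nothing, which is why near-even splits dominate in practice.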

In 1995, Joseph Henrich was a graduate student in anthropology studying under Robert Boyd.  After hearing about the results in the play of the Ultimatum Game--mostly conducted with American undergraduate students--Henrich wondered how the game would be played by the people he was studying--the Machiguenga, who live in the Peruvian Amazon in small family-level groups that subsist on a combination of hunting, gathering, fishing, and slash-and-burn agriculture.  When he had them play the game that summer, he discovered that most proposers offered no more than 15 percent of the pot to responders, and that almost all of these offers were accepted.  So, in contrast to the American students, the Machiguenga were acting as rationally selfish maximizers, apparently confirming the Homo economicus model!

Boyd and Henrich decided that they should organize a large group of anthropologists and economists who would administer the Ultimatum Game to some small-scale societies around the world, representing a wide range of culturally diverse social organizations.  Twelve field researchers recruited subjects from fifteen societies.  The Ultimatum Game was played at each site.  At a few of the sites, the Dictator Game and the Public Goods Game were also played.  All of the games were played anonymously.  The Dictator Game is played like the Ultimatum Game, except that the responder has no chance to accept or reject the offer.  The proposer dictates the split of the money, and so if he is generous, this must express his sense of fairness and not any fear of rejection.  In the Public Goods Game, the players are individually allocated some amount of money.  They can then contribute some of their allocation to a common pool, which is then increased and divided equally among all the players regardless of their contribution.  The selfish free-rider will not contribute anything to the common pool.
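The incentive structure of the Public Goods Game can also be made concrete with a short sketch.  This follows the rules as described above; the multiplier of 2 applied to the common pool is a hypothetical assumption for illustration, not a figure from the project.

```python
def public_goods_payoffs(endowments, contributions, multiplier=2.0):
    """Each player keeps what she did not contribute, plus an equal
    share of the multiplied common pool.  The multiplier is a
    hypothetical assumption."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [e - c + share for e, c in zip(endowments, contributions)]

# Three players, each allotted 10; the third free-rides.
mixed = public_goods_payoffs([10, 10, 10], [10, 10, 0])
print(mixed)  # the free-rider earns the most

# Yet full cooperation beats universal defection for everyone:
print(public_goods_payoffs([10, 10, 10], [10, 10, 10]))  # each gets 20.0
print(public_goods_payoffs([10, 10, 10], [0, 0, 0]))     # each keeps only 10.0
```

This is the free-rider dilemma in miniature: defecting is individually best in any single round, but a group of cooperators ends up richer than a group of defectors.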

The fifteen societies were from twelve countries on four continents and New Guinea.  One was a purely foraging/hunting-gathering society (the Hadza of Tanzania).  Others included slash-and-burn semi-nomadic horticulturalists, pastoralists, and sedentary farmers.  The money allocated for each game was calculated to be the equivalent of an average day's wage for each society.  The studies of these societies were completed in 2000.

In 2002, a second phase of this project was started.  Four of the sites from the first phase were included in this second phase, and twelve new sites were added, including a group of Africans in a large city (Accra, Ghana) and a group of Americans in a small rural town in Missouri.

In this second phase, three games were played at every site: the Dictator Game, the Strategy Method Ultimatum Game, and the Third Party Punishment Game.  In the Strategy Method Ultimatum Game, the responder must say what his response would be to each of a range of possible offers from the proposer; and that determines the response once the proposer has made the offer, without the proposer knowing ahead of time what the response is going to be.  In the Third Party Punishment Game, two players are allotted a sum of money (the stake), and a third player gets half of this amount.  The first player must decide how much of the stake to give to the second player, with the second player making no decisions.  The third player must decide, for each possible offer, whether to pay 20 percent of his own allocation to punish the first player.  This measures the willingness of an individual to engage in costly third-party punishment to enforce social norms of fair behavior.  As in all of the games, the stake was set at one day's wage in the local economy, which meant that more money was involved in these games than is typically the case with games using university students.
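The payoff structure of the Third Party Punishment Game can be sketched as follows.  The 20-percent punishment cost follows the description above; the size of the fine deducted from the punished player (30 percent of the stake) is a hypothetical assumption added for the sketch.

```python
def third_party_punishment(stake, offer, punish, fine_rate=0.3):
    """Return payoffs (player1, player2, player3) for one round.
    Player 3 holds half the stake and may pay 20 percent of his
    allocation to fine player 1; fine_rate is a hypothetical
    assumption about the size of that fine."""
    allocation = stake / 2
    p1 = stake - offer   # player 1 keeps whatever he does not give
    p2 = offer           # player 2 makes no decisions
    p3 = allocation
    if punish:
        p3 -= 0.2 * allocation    # punishment is costly to the punisher
        p1 -= fine_rate * stake   # and reduces the stingy player's payoff
    return p1, p2, p3

stake = 100
print(third_party_punishment(stake, offer=10, punish=True))   # (60.0, 10, 40.0)
print(third_party_punishment(stake, offer=50, punish=False))  # (50, 50, 50.0)
```

Because player 3 gains nothing and loses money by punishing, any punishment observed in the game must reflect an internalized norm of fairness rather than self-interest.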

This experimental research supported four major findings (Henrich et al. 2014, pp. 131-33).  First, "fairness and punishment show both substantial variability and reliable patterns across diverse populations."  There is great cultural variability in human societies.  In all three experiments, there is substantial variability in the average offers and the willingness to punish low offers.  People in Western industrialized societies are very different from people in other societies, and so when undergraduate students in Western countries play these games, there is no reason to assume that their conduct shows a universal human nature. 

And yet this cultural variability does not mean that human beings are infinitely malleable blank slates on which culture can write just anything.  There is a clear pattern in this variation that shows how human culture is constrained by human nature.  Offers of 50 percent were always the most acceptable offer.  There were very few offers above 50 percent.  No society showed an average offer above 60 percent.  So there are no societies where most people give more than half, or where most people give zero.  The Hadza foragers were the most selfish people, but even they are not completely selfish.  In the Dictator Game, 71 percent of the Hadza offered more than zero.  On the other end of the scale, neither do we see completely other-regarding behavior.  In the play of the Dictator Game, only three individuals out of 427 offered 100 percent.  Hume and Smith were right in observing that human beings naturally show limited benevolence.

The second major finding is that "fairness increases with market integration."  Market integration was measured as the percentage of the average diet purchased in a market.  In all three games, the strength of fairness (making more equal offers) is correlated with increasing market integration.  The lowest average offers were made by the Hadza, a foraging band society in Tanzania with almost no market integration.  So it seems that markets promote morals by fostering social norms of fairness and cooperation.  A market society does not make people selfish, greedy, and amoral.

The third major finding is that "fairness increases with an individual's participation in a world religion."  In the societies studied, "world religion" means either Islam or some form of Christianity.  As opposed to the indigenous religions in some of these societies, Christianity and Islam teach that God is a powerful and moral divinity who punishes the bad and rewards the good.  This belief seems to reinforce social norms of fairness as reflected in the play of the Dictator Game and the Ultimatum Game.

The fourth major finding is that "willingness to engage in costly punishment increases with community size."  People from larger societies tend to punish more.  This is manifest in both second-party and third-party punishment.  This could explain the cultural evolution of social norms that made it possible--beginning about 10,000 to 5,000 years ago--for human societies to expand in size far beyond the small foraging bands that characterized most of human evolutionary history.  Those groups with social norms enforced by costly punishment could expand and outcompete those groups that lacked this cultural enforcement of group morality.

Henrich and his colleagues have argued that these findings of their experimental game project confirm the claims of Montesquieu, David Hume, and Adam Smith that more market-integrated societies foster social norms of fairness that facilitate expanded cooperation in the extended order of commercial exchange.


Jean Ensminger and Joseph Henrich, eds., Experimenting with Social Norms: Fairness and Punishment in Cross-Cultural Perspective (New York: Russell Sage Foundation, 2014).

Ernst Fehr and Colin F. Camerer, "Social Neuroeconomics: The Neural Circuitry of Social Preferences," Trends in Cognitive Science 11 (2007): 419-27.

Joseph Henrich, Jean Ensminger, Abigail Barr, and Richard McElreath, "Major Empirical Results: Markets, Religion, Community Size, and the Evolution of Fairness and Punishment," in Ensminger and Henrich, Experimenting with Social Norms, 89-148.

Joseph Henrich, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, and Herbert Gintis, eds., Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies (Oxford: Oxford University Press, 2004).

Alan G. Sanfey, "Social Decision-Making: Insights from Game Theory and Neuroscience," Science 318 (2007): 598-602.

Alan G. Sanfey, James K. Rilling, Jessica A. Aronson, Leigh E. Nystrom, and Jonathan D. Cohen, "The Neural Basis of Economic Decision-Making in the Ultimatum Game," Science 300 (2003): 1755-58.

Some posts on related topics can be found here, here, here, here, here, and here.