David Christian's Maps of Time: An Introduction to Big History (University of California Press, 2004) is one of the most intellectually exciting books that I have read in recent years. A historian specializing in Russian history, Christian taught for many years at Macquarie University in Sydney, Australia; he now teaches at San Diego State University. In 1989, the faculty in the history department at Macquarie were discussing what kind of introduction to history the department ought to offer their students, and Christian remarked, "Why not start at the beginning?" Traditionally, modern historians have assumed that the "beginning" of history was about 5,000 years ago in Mesopotamia, where the first cities, states, and agrarian civilizations appeared, and writing was invented. Whatever happened before that was relegated to "prehistory." But Christian decided that a modern scientific view of history should allow historians to start at the real beginning of everything--the Big Bang, the origin of the universe, 13 billion years ago--so that human history would be understood properly as part of cosmic history. He called this "big history," and his book surveys the large patterns of that history from the Big Bang to the present, with speculations about the future. (Daniel Lord Smail's On Deep History and the Brain--the subject of a previous post--belongs to this same genre of historical writing.)
Those of us who take seriously the idea of liberal education as embracing all the intellectual disciplines need to look to Christian's book as a synthesis of knowledge in the natural sciences, the social sciences, and the humanities within the framework of universal history. This exemplifies what I have called "Darwinian liberal education," and what Edward O. Wilson has called "consilience"--the unity of all knowledge. Those of us striving to develop a biopolitical science should see Christian's book as a way of framing such a science by putting human political history within the broad sweep of cosmic evolutionary history. I foresee using Christian's book as a main text in my "Biopolitical Theory" class at Northern Illinois University.
After surveying the modern scientific understanding of the origins and history of the universe, the stars, the Earth, and life on Earth, Christian turns to human evolutionary history from the Paleolithic era to the present. He divides human history into three major eras of human social life: the foraging era (from 250,000 years ago to 10,000 years ago), the agrarian era (from 10,000 years ago to 1750 A.D.), and the modern era (from 1750 to the present). Christian has written a short summary of this human history for the first volume of the Berkshire Encyclopedia of World History, which has also been published separately as a short book.
Of course, one of the obvious problems in such a broad view of everything is finding general patterns to sustain a grand narrative. The broadest pattern for Christian's history is taken from Eric Chaisson--the emergence of ever more complex levels of order requires structuring the flow of energy to sustain order against the entropic tendency of the second law of thermodynamics. Within that cosmic pattern, human history's three eras can be seen as three levels of ever more complex order that require ever more complex means for extracting energy to sustain ever larger human populations. Foragers extract energy through hunting wild animals and gathering wild plants. Farmers extract energy through domesticated plants and animals. In the modern era, humans have come to rely increasingly on fossil fuels as sources of energy. The ultimate source of this energy in plants, animals, and fossil fuels is sunlight.
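Chaisson makes this pattern quantitative with his measure of "energy rate density"--the free energy flowing through a system per second per gram of its mass. As a minimal sketch of the idea, here are rough order-of-magnitude values of the kind Chaisson reports; the numbers below are illustrative assumptions, not figures taken from Christian's book:

```python
# Illustrative sketch of Eric Chaisson's "energy rate density": free
# energy flow per unit time per unit mass. The values are rough
# order-of-magnitude assumptions for illustration only.

ENERGY_RATE_DENSITY = {        # erg per second per gram (approximate)
    "galaxy (Milky Way)": 0.5,
    "star (Sun)": 2.0,
    "plant (photosynthesis)": 900.0,
    "animal body": 2e4,
    "human brain": 1.5e5,
    "modern industrial society": 5e5,
}

for system, phi in ENERGY_RATE_DENSITY.items():
    print(f"{system:28s} ~ {phi:>11,.1f} erg/s/g")

# Each step up in complexity channels a denser flow of energy, which is
# the pattern Christian borrows for his three eras of human history:
# foragers, farmers, and fossil-fuel societies tap progressively denser
# energy flows to sustain ever larger populations.
```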
Another general pattern in human history is set by the human capacity for "collective learning." Although many animals have some capacity for learning, human language enhances the human capacity for learning beyond anything seen in the rest of the animal world. This permits human beings to spread widely over varying niches in their ecosystems and even to create new niches. The human history from foraging to farming to modernity is largely the history of human collective learning as human beings learn new ways to extract energy from their environments to sustain ever larger populations of human beings in ever more complex societies--so that now human beings live in a global system that has transformed life on earth.
Another related theme of Christian's history is emergence: at each higher level of complexity, phenomena appear that could not have been predicted from knowledge of the lower levels. This supports my argument that E. O. Wilson's consilience by strong reductionism doesn't work if it ignores the emergent complexity by which novelty arises at higher levels that cannot be reduced to the lower ones. It is true, however, that these higher levels are constrained by the lower levels--so that, for example, even the highest levels of complex order in the universe are subject to the second law of thermodynamics and the need to extract energy to sustain order.
Like any good book on the biggest questions that human beings can ask themselves, Christian's will stimulate debate and disagreement. Although I generally find his book persuasive, I also find myself arguing with him about some points. Here I will raise only a couple of points for discussion.
As opposed to agrarian states, foraging societies do not have elaborate hierarchies of power. But I think Christian is mistaken to suggest that they have no hierarchies at all. Actually, he indicates in at least one passage that they may well have "personal hierarchies" (240). This is important because it becomes hard to understand how agrarian states could produce such complex hierarchies of rule if there were no natural disposition to status hierarchies at all. From my reading of the anthropological literature on foragers, it seems to me that although foraging societies are roughly egalitarian, this does not mean that there are no distinctions of rank at all. Some individuals in foraging groups have more status than others, and some exercise leadership, but this leadership is ephemeral and informal, and those who become too exploitative in their dominance are subject to social pressure that levels everyone to preserve individual autonomy. In agrarian societies, this natural disposition of some individuals to dominate others is magnified, and much of the rest of the social and political history of human beings is the history of exploitative dominance and of the efforts of subordinates to resist such exploitation. On this point, I follow Chris Boehm on the tension between the desire of some for dominance and the desire of many to resist exploitative dominance. I have developed this point in Darwinian Conservatism.
On a somewhat related point, I think Christian does not give enough attention to the history of political philosophy, and thus he does not see that many of his ideas about human social evolution were well understood by earlier social theorists. Not only that, but some of these political philosophers even tried consciously to shape the movement of that evolution.
Now, of course, I understand that in such a "big history" as Christian's, one must abstract from the details of particular events and individual behavior. But still, as Christian indicates, evolutionary history is contingent in that it often turns crucially on particular events. For example, the asteroid impact that led to the extinction of dinosaurs 65 million years ago altered the whole course of the evolution of life on earth. Similarly, Christian concedes that major political changes often depend on "the decisions and actions of individuals" (481). This would include individual thinkers who understood the general pattern in the natural history of society and who might try to shape that pattern.
For example, Christian notes that the European contact with the New World beginning in the 16th century created the first global exchange network that led to the modern era, and he also notes that this new global knowledge contributed to the intellectual evolution of modern science (393-94). One manifestation of this would be how travelers' reports from the New World influenced political philosophers like Montaigne, Hobbes, Locke, and Rousseau. For instance, Locke's Second Treatise of Government lays out the same natural history of society--from foraging societies to agrarian societies to commercial societies--that Christian develops in his book. But what one sees in Locke's work is a conscious argument for moving into the era of commercial society based on private property, free market exchange, and limited government. Political philosophers like Locke did not just see this evolutionary movement at work. They also set themselves to promote it and direct it. Their cultural and political influence on individual thinkers and politicians would explain much of the course of that social evolutionary history that Christian describes. But Christian never ponders the importance of those political thinkers like Locke who understood what was happening and who tried to direct it.
Studying Locke's argument for liberal democratic capitalism might also have led Christian to reconsider his Marxist tendency to disparage "consumer capitalism" (446-457, 472-81). Although he admits that the Marxist regimes of the 20th century did not offer a good alternative to capitalism, he still cannot give up his Marxist scorn for capitalism. He is surely entitled to lay out his criticisms of modern capitalist society. But he doesn't show that he has thought through the classical liberal argument for capitalism and limited government--an argument laid out by thinkers like Locke, Smith, and Hayek.
I should also say, however, that scholars of political philosophy would benefit from studying Christian's "big history" and seeing how it might illuminate the broad historical movements that set the terms for debate in the history of political philosophy.
While conceding that capitalism has been the primary engine for the explosive population growth and technological innovation of the past 250 years, Christian worries that unchecked capitalist growth over the next century could bring environmental and social collapse. For Christian, the only way to achieve "sustainable development" is for capitalism to be steered by governmental regulation that limits growth. But Christian does not respond to the arguments of those like Terry Anderson who promote "free market environmentalism" based on the idea that the best path to sustainable development is through private property and the rule of law. Christian's worry about a future of depleted resources reflects his devotion to the ideas of Thomas Malthus. But those like Anderson would argue that Malthus failed to see how Adam Smith's vision of capitalism laid out a path to sustainable development that remains as powerful for the future as it has been for the last couple of centuries.
Christian concludes his book by speculating on the future of the universe long after human beings and all life have been extinguished by the expansion and then cooling of the sun. "The universe will be a dark, cold place, filled only with black holes and stray subatomic particles that wander light-years apart from each other" (489). He concludes that while the creation of human beings as the only intelligent species capable of contemplating the universe might appear to be the ultimate purpose of the universe's creation from nothing, there is no scientific justification for this. "Modern science offers no good reason for believing in such anthropocentrism. Instead, it seems, we are one of the more exotic creations of a universe in the most youthful, exuberant, and productive phase of a very long life. Though we no longer see ourselves as the center of the universe or the ultimate reason for its existence, this may still be grandeur enough for many of us."
This sounds a lot like Friedrich Nietzsche in "On Truth and Lie in an Extra-Moral Sense": "In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the haughtiest and most mendacious minute of 'world history'--yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die."
Nietzsche thought this was a "deadly truth" of modern science. But he also thought that human beings might learn to embrace a "gay science" in which reverence for earthly life in all of its transience and contingency might be enough for them. Is there grandeur enough in this vision?
Sunday, June 22, 2008
Brain Imaging Is Not Mind Reading
Last November 11th, The New York Times published an op-ed article with the title "This Is Your Brain on Politics." The authors--neuroscientists and political scientists--reported that they had done brain scans of 20 swing voters responding to images of the leading American presidential candidates. For example, they wrote: "Mitt Romney shows potential. Of all the candidates' speech excerpts, Mr. Romney's sparked the greatest amount of brain activity, especially among the men we observed. His still photos prompted a significant amount of activity in the amygdala, indicating voter anxiety, but when the subjects saw him and heard his video, their anxiety died down. Perhaps voters will become more comfortable with Mr. Romney as they see more of him." They also reported that John McCain and Barack Obama elicited "a notable lack of any powerful reactions, positive or negative." This article provoked a letter to the editor signed by 17 neuroscientists who protested the publication of such a crude brain imaging study that had no scientific basis. They noted that "a one-to-one mapping between a brain region and a mental state is not possible." They also indicated that the amygdala cannot provide a brain marker of anxiety, because the amygdala is activated not just by anxiety but also by various kinds of arousal and positive emotions.
This illustrates what I called last year "the brain imaging fallacy." That post can be found here. The fallacy is the assumption that a brain image--as in functional magnetic resonance imaging (fMRI)--is a photograph of the mind at work. Almost every day, there is a new report from proponents of "neuroeconomics," "neurolaw," or "neuroethics" who claim to have brain scans showing the mental activity associated with economic, legal, or ethical behavior.
There seems to be a growing recognition in the scientific community of the serious ramifications of this brain imaging fallacy. Recently, the two most important journals of science have published articles on this problem. Nature (June 12, 2008) has published an article by Nikos Logothetis on "What we can and what we cannot do with fMRI." Science (June 13, 2008) has published an article by Greg Miller on "Growing Pains for fMRI." The scientific and philosophical difficulties with brain imaging have been surveyed recently in an article by Adina Roskies in the journal Neuroethics. The Winter 2008 issue of The New Atlantis has articles by O. Carter Snead and Matthew Crawford on the implications of "neuro-talk" for law and morality.
Here are some of the difficulties that contribute to the brain imaging fallacy. (A short sketch after the list illustrates the first few.)
1. Functional neuroimaging is not a direct measure of neuronal activity because it measures only indirect surrogates--particularly, the flow of oxygenated blood.
2. The interpretation of these brain scans depends on elaborate theoretical assumptions about the relationships between blood flow and neural activity.
3. Many different kinds of brain activity can produce the same fMRI signal.
4. No two brains are alike in their structure or functioning, and therefore brain scans require an averaging procedure.
5. Interpreting these brain scans assumes that mental activity can be decomposed into distinct parts corresponding to distinct parts of the brain, although the interconnectedness of brain activity makes this unlikely to be completely true.
6. Although the sensory and motor activities of the brain do seem to be clearly localized in identifiable parts of the brain, it is much more speculative to assign the higher cognitive activities to particular parts of the brain.
7. Although brain scans and neuroscience generally show clearly that the brain supports mental activity, this does not show how the brain does this.
8. Since the mind is an emergent property of the brain, the mind depends on, but is not simply reducible to, the brain; and this gap is most evident in the contrast between our subjective experience of consciousness and the objective study of the brain in neuroscience. There will always be a gap between our scientific observation of brains at work and our introspective knowledge that comes from our first-hand experience.
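To make the first few points concrete, here is a minimal sketch of how a standard fMRI analysis actually infers "activation." The numbers and the response function are invented for illustration; the point is only that the result is a statistical construct--the experimental design convolved with an assumed hemodynamic model and regressed against a noisy blood-oxygenation signal--not a photograph of thought.

```python
# A minimal sketch of fMRI inference; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_scans, tr = 120, 2.0                      # 120 scans taken 2 s apart
t = np.arange(n_scans) * tr

# The experimental design: a stimulus alternating 20 s on / 20 s off.
stimulus = ((t // 20) % 2).astype(float)

# Point 2: the analysis assumes a hemodynamic response function (HRF)
# linking neural activity to blood flow; here, a simple gamma-like
# shape peaking a few seconds after the stimulus.
hrf_t = np.arange(0, 30, tr)
hrf = hrf_t**5 * np.exp(-hrf_t)
hrf /= hrf.sum()

# Point 1: what the scanner measures tracks oxygenated blood flow, so
# the model prediction is the design convolved with the assumed HRF.
predicted = np.convolve(stimulus, hrf)[:n_scans]
signal = 0.8 * predicted + rng.normal(0.0, 0.5, n_scans)  # noisy "voxel"

# The reported "activation" is a regression coefficient of the signal
# on that model prediction.
X = np.column_stack([predicted, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(f"estimated activation (beta) = {beta[0]:.2f}")

# A different assumed HRF, or averaging over differently shaped brains
# (point 4), would change the estimated "activation."
```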
As I have argued on this blog and in Darwinian Conservatism, the scientific study of the human mind can clarify but not resolve one of the deepest mysteries of human experience--the emergence of the soul in the brain.
Those who interpret brain imaging as supporting a reductionist materialism conclude that this denies the traditional moral and legal standards of individual choice and responsibility. But once we recognize the brain imaging fallacy, and see the mind as an emergent product of the brain, it becomes clear that neuroscience does not contradict our traditional understanding of moral choice and legal responsibility as rooted in the power of the mind for deliberate thought and action.
Sunday, June 15, 2008
Are Conservatism and Liberalism Genetically Inherited?
In 2005, the American Political Science Review published an article by John Alford, Carolyn Funk, and John Hibbing entitled "Are Political Orientations Genetically Transmitted?" Relying on studies of twins that compare the traits of monozygotic (identical) twins and dizygotic (fraternal) twins, they concluded that propensities towards "conservatism" or "liberalism" are indeed strongly influenced by genes, because genetics accounts for approximately half of the variance in ideology. This article received wide coverage in the popular press, including an article in the New York Times.
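The logic behind that "half of the variance" estimate can be seen in miniature. Twin studies of this kind typically rest on Falconer's formula, which estimates heritability as twice the difference between the identical-twin and fraternal-twin correlations. Here is a minimal sketch with illustrative correlations of roughly the size reported in this literature--assumed numbers, not figures taken from the article:

```python
# Falconer's twin-study logic: identical (MZ) twins share ~100% of their
# genes, fraternal (DZ) twins ~50%, and both kinds are assumed to share
# their rearing environment equally. The correlations below are
# illustrative assumptions, not data from Alford, Funk, and Hibbing.
r_mz = 0.62   # ideology correlation between identical twins
r_dz = 0.40   # ideology correlation between fraternal twins

h2 = 2 * (r_mz - r_dz)   # heritability: genetic share of the variance
c2 = r_mz - h2           # shared (family) environment
e2 = 1.0 - r_mz          # unshared environment plus measurement error

print(f"heritability h^2         = {h2:.2f}")  # 0.44 -- "about half"
print(f"shared environment c^2   = {c2:.2f}")  # 0.18
print(f"unshared environment e^2 = {e2:.2f}")  # 0.38
```

Note how much is packed into the assumptions: equal environments for both kinds of twins, purely additive genetic effects, and no gene-environment interaction--which is where the debate below begins.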
Now, in the June 2008 issue of Perspectives on Politics--published by the American Political Science Association--there is a critique of this article by Evan Charney along with responses to Charney by Alford, Funk, and Hibbing in one article and by Rebecca Hannagan and Peter Hatemi in another. The debate in these articles provides a good introduction to some of the controversies surrounding behavior genetics, particularly as applied to the social sciences.
Unfortunately, one sees here a recurrent problem in the debate over the genetics of human social behavior--both sides in the debate tend to employ "straw man" arguments. Like many critics of behavior genetics, Charney attacks this field of study for promoting genetic determinism. Like many proponents of behavior genetics, Alford, Funk, Hibbing, Hannagan, and Hatemi attack their critics as environmental determinists. But when one looks carefully, one can see that no one is defending genetic determinism, and no one is defending environmental determinism. In fact, everyone in the debate agrees that human behavior arises from a complex interaction of genes and environment. But instead of trying to work through this interactionist complexity, the participants in the debate ridicule their opponents by attributing to them positions that they don't really take.
Charney says that he is resisting the tendency "to view ever more complex attitudes or systems of belief as in some sense genetically determined (or 'heritable')" (299). He disagrees with the article by Alford, Funk, and Hibbing as claiming "that political orientations are genetically determined" (300). But this is not what Alford, Funk, and Hibbing say. Instead, they say that they have presented evidence "that political orientations are transmitted genetically as well as culturally" (321), and they emphasize that "genes and the environment interact in a complex fashion." Similarly, Hannagan and Hatemi say that "genetic factors exert their influence on an organism in a particular environment such that any trait must be a combination of the two factors," and so "it may be the case that the more we learn about genes the more we discover the importance of relevant environmental influences on behavior" (332-33). It seems then that Charney's attack on genetic determinism is an attack on a straw man.
But the same rhetorical strategy is employed by the other side. Alford, Funk, and Hibbing say that they are challenging "environmental determinism," which claims "that genes are irrelevant to human behavior" (321, 325). They also attack Charney as a "dualist" who assumes an absolute separation between mind and body. Similarly, Hannagan and Hatemi criticize Charney's "exclusively environmental explanations" as assuming a "social determinism" that is implausible (330, 333). But then Charney says that any claim "that genes are irrelevant to human behavior" is "preposterous" and is not the claim he has made (339-40). He also indicates that nowhere in his article does he endorse the "dualism" attributed to him by Alford, Funk, and Hibbing.
This straw-man argumentation rests on a false dichotomy of nature and nurture. Almost no one believes that human behavior is determined completely by genetic nature. And almost no one believes that human behavior is determined completely by social nurture. Almost everyone who examines the relevant evidence and argument would have to conclude that for complex human behavior the causes must be both genetic and cultural.
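The interactionist point can even be put arithmetically. In a toy model with an interaction term--all numbers invented for illustration--the same genotype produces different traits in different environments, and the ranking of genotypes can reverse, so there is no environment-independent answer to how much is "nature" and how much "nurture":

```python
# Toy gene-environment interaction; all numbers are invented.
def phenotype(g: float, e: float) -> float:
    # additive genetic and environmental effects plus an interaction term
    return 1.0 * g + 1.0 * e - 2.0 * g * e

for g in (0.0, 1.0):          # two genotypes
    for e in (0.0, 1.0):      # two environments
        print(f"genotype={g:.0f}, environment={e:.0f} -> trait={phenotype(g, e):.1f}")

# Output: genotype 1 scores higher than genotype 0 in environment 0
# (1.0 vs 0.0) but lower in environment 1 (0.0 vs 1.0). Neither factor
# alone determines the trait.
```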
There are good reasons to agree with Alford, Funk, and Hibbing that there are some genetically inherited propensities of temperament that influence one's political ideology. But what one inherits directly is not the tendency to "conservatism" or "liberalism," but "orientations to bedrock principles of group life" that might have an evolutionary history going back to our earliest evolutionary ancestors (324-25).
For example, one distinction between liberalism and conservatism made by Alford, Funk, and Hibbing is that conservatives tend to be pessimistic about human nature, while liberals tend to be more optimistic. This conforms to what I have said in Darwinian Conservatism about the "realist" view of human nature in conservative thought as opposed to the "utopian" view in liberal thought. But, as Charney indicates, the detailed elaboration of conservative and liberal thought is a product of culturally contingent circumstances that arose in Europe and the United States over the last two centuries (for instance, the divergent reactions to the French Revolution). Consequently, the political debate between conservatives and liberals will manifest a complex interaction of inherited temperament and cultural experience.
Monday, June 09, 2008
Daniel Smail and the Neuroscience of History
At the end of August, I will be presenting a paper in Boston at the annual meetings of the American Political Science Association on "Biopolitical Science: Darwin, Lincoln, and the Deep History of Politics." This paper will be part of a series of panels on "Evolution and Morality" sponsored by the American Society for Political and Legal Philosophy. For my paper, there will be written responses from Daniel Lord Smail, a historian at Harvard, and Richard Richards, a philosopher at the University of Alabama.
The very idea of "deep history" is one that I have borrowed from Smail, particularly as he develops it in his recent book On Deep History and the Brain (University of California Press, 2008). Smail points out that historians today assume that the true "history" of humanity began about 5,000 years ago with the invention of writing, and so everything else that happened to human beings prior to that is dismissed as "prehistory." Noting that there is plenty of evidence for human life prior to writing, Smail suggests there is no good reason why historians shouldn't develop a grand historical narrative that would encompass the whole of human experience from the Paleolithic era to the present.
That few historians today can even conceive of this reflects what Smail calls "the grip of sacred history." In the Western world, the influence of biblical religion led historians for many centuries to assume that history began about 6,000 years ago with the divine creation of Adam and Eve. Even when secular historians in the eighteenth and nineteenth centuries began to break away from the biblical creation story, they still held onto a secular version of this story: they assumed that history began about 6,000 years ago with the invention of agriculture and writing in Mesopotamia. Whatever happened to human beings prior to this was set aside as "prehistory."
In his new book, Smail argues for rejecting the chronological limits of sacred history and pursuing the study of deep history as including the entire genetic and cultural history of humanity. This would require interdisciplinary research combining traditional history and biological science. He offers an outline of how this could be done by concentrating on neurohistory--the history of humanity based on the evolution of the human brain. For this, historians would apply contemporary neuroscience to the study of human history. Historians must always employ psychological assumptions about human motivations in their historical explanations. Smail suggests that neuroscience would provide historians a psychological science rooted in modern biology.
One example of neurohistory for Smail is explaining the human dominance hierarchies that arose after the agricultural revolution allowed small groups of elite individuals to rule over large groups of subordinates. Dominant individuals must employ whatever devices they can to induce submissive dispositions in their subordinates. In matriarchal baboon societies, high-ranking females harass subordinate females in ways that create high levels of stress. Similarly, we might expect that in human dominance hierarchies, high-ranking individuals would intimidate their subordinates in ways that would generate stress hormones that make them feel submissive.
At one level, such an explanation might seem so obvious as to be trivial. After all, if subordinate individuals are submissive towards dominant individuals, one would assume that there must be some kind of biochemical state in the brains of the subordinates associated with a psychic state of submission. But at another level, such an explanation might seem dubious insofar as it is untestable, because there is no way to gather neurophysiological data from the distant past to support it. Smail admits this, but argues that even without direct tests of his neurophysiological hypotheses, we might find support in the historical sources for such neurohistorical reasoning. Knowledge of human neurophysiology offers an interpretive framework within which we might generate historical explanations that are heuristically powerful.
More generally, Smail argues for viewing cultural history as depending on "psychotropic mechanisms"--behavioral practices and institutional structures that have neurochemical effects that shape human moods. We might view each kind of social order as having a distinct psychotropic profile. For example, we might explain the medieval church as shaping the behavior of believers through ceremonial rituals and religious practices that modulate the chemical messengers in the brain in ways that sustain dependence on the church. But then in the eighteenth century, there was a clear decline in religious activities, which might be correlated with the popularity of new psychotropic commodities like coffee, tobacco, and alcoholic drinks like gin and rum. In fact, much of the public life of European cities in the eighteenth century was centered in coffeehouses and cafes.
To be persuaded of Smail's specific explanations, I would need more evidence and argumentation than he provides in his book. But I can accept his general idea--that every form of social order must appeal to the human nervous system by shaping the neurochemical activity of individuals to support the behavioral patterns of that society. I have argued that there are at least twenty natural desires that are universal to all human societies because they are rooted in human biological nature, and these twenty natural desires provide a universal basis for moral and political experience. These natural desires constrain but do not rigidly determine the customary traditions of social life. And within the constraints of natural desires and customary traditions, there is still some freedom for the exercise of prudential judgment by individuals. I would say that what Smail calls "psychotropic mechanisms" are the various means by which a social order modulates the neurochemistry corresponding to the twenty natural desires.
Although Smail's neurohistory has much in common with evolutionary psychology, he criticizes the evolutionary psychologists in the tradition of Cosmides and Tooby for being too ahistorical in their reasoning, because they jump from the "environment of evolutionary adaptation" as shaped in the Paleolithic era to the modern present without any account of the historical development from one to the other. Although Smail agrees with the evolutionary psychologists that the genetic evolution of the human species has shaped the human brain to manifest the predispositions of human nature, he thinks the evolutionary psychologists don't go far enough in recognizing the plasticity of that brain and how that neural plasticity makes possible cultural history. Like David Sloan Wilson, Smail emphasizes the need for explaining human behavior as shaped by gene-culture coevolution, and for that we need neurohistory.
I would say that what we need here is a theoretical framework of constrained plasticity that moves through three levels of social order--the natural history of the human species, the cultural history of human groups, and the individual history of the human beings in each group.
Smail has been influenced by David Buller's critique of evolutionary psychology in his book Adapting Minds. Stressing the diversity and contingency of human behavior and thought over evolutionary time, Buller concludes that "human nature is a superstition" (480). But I would say this is just as mistaken as concluding that since no two human beings have exactly the same anatomical structures, human anatomy is a superstition. While recognizing the anatomical diversity and contingency of human beings, the science of human anatomy presumes only that there is sufficient regularity in the patterns of human anatomical structure to warrant the generalizations of anatomy. Similarly, the science of human behavioral nature presumes that there is enough regularity in the patterns of human behavioral nature to warrant the study of human nature. In fact, Smail speaks about the need for broad categories of patterns or tendencies in studying human nature to support any deep history of human life. My account of the twenty natural desires specifies such patterns as manifesting a human nature grounded in the regularity of human neurophysiology.
Tuesday, June 03, 2008
Stephen Morse and the Neuroscience of Law
It seems that recent research in neuroscience is challenging traditional conceptions of free will and legal responsibility. Neuroscience shows that the brain is a physical entity governed by natural causes and is thus just as deterministic as the rest of the physical world. Neuroscience also shows how the brain determines the mind. It seems to follow that the mind's thoughts and choices are determined. But if that is so, then it seems that free will is an illusion, because no one is really responsible for his or her choices. This would deny the traditional understanding of responsibility in morality and law. People cannot be held morally or legally responsible for their choices, because they can always say, "My brain made me do it!"
If this were true, it would be a social catastrophe unprecedented in human history, because it would abolish the psychological foundations of our moral and legal systems. The fear of such a catastrophe motivates much of the fear of modern biological explanations of human nature. One can see that fear, for example, in the writing of intelligent design creationists like John West. (West has been the subject of many of my posts on this blog.) These folks assume that we cannot hold people morally and legally responsible for their behavior unless we invoke the idea that human beings have been created in God's image with a spiritual soul that cannot be explained as a product of natural evolutionary causes, because only such a supernatural soul can exercise free will in acting as an uncaused cause that transcends the realm of natural causes as studied by science.
But as I have argued in Darwinian Natural Right, in Darwinian Conservatism, and on this blog, this fear is unwarranted, because biological explanations of human nature in general and of the human brain in particular are fully compatible with traditional conceptions of moral and legal responsibility.
I agree with Jonathan Edwards, David Hume, and others who argue that moral responsibility and natural causation are compatible. But to see this compatibility, we must reject the idea of "free will" as uncaused cause. Whatever comes into existence must have a cause. Only what is self-existent from eternity--God--could be uncaused or self-determined. The commonsense notion of liberty is the power to act as one chooses, regardless of the cause of the choice. Human freedom of choice is not freedom from nature but a natural freedom to deliberate about our natural desires so that we can organize and manage our desires through habituation and reflection to conform to some conception of a whole life well lived. This is how Aristotle understood moral choice.
Similarly, Darwin believed that "every action whatever is the effect of a motive," and therefore he doubted the existence of "free will." Our motives arise from a complex interaction of innate temperament, individual experience, social learning, and external conditions. Still, although we are not absolutely free of the causal regularities of nature, Darwin believed, we are morally responsible for our actions because of our uniquely human capacity for reflecting on our motives and circumstances and acting in the light of those reflections. "A moral being is one who is capable of reflecting on his past actions and their motives--of approving of some and disapproving of others; and the fact that man is the one being who certainly deserves this designation is the greatest of all distinctions between him and the lower animals."
If we understand moral responsibility in this way, and see this as the conception of responsibility assumed in the law, then neuroscientific research on the natural causality of the brain is no threat to moral and legal responsibility. Stephen Morse--a law professor at the University of Pennsylvania Law School who specializes in psychology and law--has laid out the case for this conclusion based on a "compatibilist" view of moral choice. He has done this in "New Neuroscience, Old Problems," a paper published in two books--Neuroscience and the Law (2004), edited by Brent Garland, and Defining Right and Wrong in Brain Science (2007), edited by Walter Glannon, both published by Dana Press.
As Morse indicates, the "hard determinists" and the "metaphysical libertarians" agree that "free will" would require a "contra-causal freedom." But while the determinists deny that there is such a thing, the libertarians affirm its existence as an uncaused cause beyond natural causality. If we had to choose between these two positions, neuroscience would favor the determinists.
But Morse rightly argues that the law's conception of responsibility does not require a "contra-causal freedom." It requires only that human beings have sufficient practical rationality to understand their choices and to act on their deliberate decisions. When rationality is so diminished that someone cannot understand or act on his choices--a child or someone who is insane, for example--then we excuse their behavior and do not hold them fully responsible for their actions. But this conception of moral and legal responsibility as based on the capacity for practical deliberation or rationality does not require any transcendence of natural causality.
Under the "compatibilist" conception of responsibility that Morse, Darwin, and I defend, research in neuroscience can have some interesting implications for law, but it poses no fundamental challenge to the traditional understanding of legal responsibility. Neuroscience can tell us a lot about the natural causes in the brain that predispose human beings in one direction or another. But this does not deny what Darwin recognized as the uniqueness of a human beings as moral beings capable of reflecting on their circumstances and acting on the basis of past experience and future expectations. This is not free will as an uncaused cause, but it is the natural freedom that human beings have as a product of their evolved nature.
If this were true, it would be a social catastrophe unprecedented in human history, because it would abolish the psychological foundations of our moral and legal systems. The fear of such a catastrophe motivates much of the fear of modern biological explanations of human nature. One can see that fear, for example, in the writing of intelligent design creationists like John West. (West has been the subject of many of my posts on this blog.) These folks assume that we cannot hold people morally and legally responsible for their behavior unless we invoke the idea that human beings have been created in God's image with a spiritual soul that cannot be explained as a product of natural evolutionary causes, because only such a supernatural soul can exercise free will in acting as an uncaused cause that transcends the realm of natural causes as studied by science.
But as I have argued in Darwinian Natural Right, in Darwinian Conservatism, and on this blog, this fear is unwarranted, because biological explanations of human nature in general and of the human brain in particular are fully compatible with traditional conceptions of moral and legal responsibility.
I agree with Jonathan Edwards, David Hume, and others who argue that moral responsibility and natural causation are compatible. But to see this compatibility, we must reject the idea of "free will" as uncaused cause. Whatever comes into existence must have a cause. Only what is self-existent from eternity--God--could be uncaused or self-determined. The commonsense notion of liberty is power to act as one chooses regardless of the cause of the choice. Human freedom of choice is not freedom from nature but a natural freedom to deliberate about our natural desires so that we can organize and manage our desires through habituation and reflection to conform to some conception of a whole life well lived. This is how Aristotle understood moral choice.
Similarly, Darwin believed that "every action whatever is the effect of a motive," and therefore he doubted the existence of "free will." Our motives arise from a complex interaction of innate temperament, individual experience, social learning, and external conditions. Still, although we are not absolutely free of the causal regularities of nature, Darwin believed, we are morally responsible for our actions because of our uniquely human capacity for reflecting on our motives and circumstances and acting in the light of those reflections. "A moral being is one who is capable of reflecting on his past actions and their motives--of approving of some and disapproving of others; and the fact that man is the one being who certainly deserves this designation is the greatest of all distinctions between him and the lower animals."
If we understand moral responsibility in this way, and see this as the conception of responsibility assumed in the law, then neuroscientific research on the natural causality of the brain is no threat to moral and legal responsibility. Stephen Morse--a law professor at the University of Pennsylvania Law School who specializes in psychology and law--has laid out the case for this conclusion based on a "compatibilist" view of moral choice. He has done this in "New Neuroscience, Old Problems," a paper published in two books--Neuroscience and the Law (2004), edited by Brent Garland, and Defining Right and Wrong in Brain Science (2007), edited by Walter Glannon, both published by Dana Press.
As Morse indicates, the "hard determinists" and the "metaphysical libertarians" agree that "free will" would require a "contra-causal freedom." But while the determinists deny that there is such a thing, the libertarians affirm its existence as an uncaused cause beyond natural causality. If we had to choose between these two positions, neuroscience would favor the determinists.
But Morse rightly argues that the law's conception of responsibility does not require a "contra-causal freedom." It requires only that human beings have sufficient practical rationality to understand their choices and to act on their deliberate decisions. When rationality is so diminished that someone cannot understand or act on his choices--a child or someone who is insane, for example--then we excuse their behavior and do not hold them fully responsible for their actions. But this conception of moral and legal responsibility as based on the capacity for practical deliberation or rationality does not require any transcendence of natural causality.
Under the "compatibilist" conception of responsibility that Morse, Darwin, and I defend, research in neuroscience can have some interesting implications for law, but it poses no fundamental challenge to the traditional understanding of legal responsibility. Neuroscience can tell us a lot about the natural causes in the brain that predispose human beings in one direction or another. But this does not deny what Darwin recognized as the uniqueness of a human beings as moral beings capable of reflecting on their circumstances and acting on the basis of past experience and future expectations. This is not free will as an uncaused cause, but it is the natural freedom that human beings have as a product of their evolved nature.
Sunday, June 01, 2008
Louann Brizendine and the Neuroscience of the Female Brain
A few decades ago, the prevailing opinion among academic intellectuals in Western societies was that men and women were born with the same brains, but that the patriarchal socialization of children by cultural learning created stereotypical differences in male and female thought and behavior that favor male oppression of women. This suggested that social reformers could liberate women from patriarchal oppression by changing the environment of social learning to promote an androgynous society in which men and women would learn to think and act in virtually identical ways. But in recent decades, research in the biology of sex differences has challenged this position by showing how the biological nature of men and women differs in ways that are not explainable as products of social construction, and thus there is a natural limit to how far cultural changes can go in creating an androgynous society.
In Darwinian Natural Right and Darwinian Conservatism, I have drawn from that new research on the biology of sex differences to support my claim that men and women really are different by nature, but that those natural differences suggest not the superiority of men over women but the complementarity of male and female norms, in which the sometimes predatory propensities of men might be moderated by the civilizing prudence of women.
Most recently, advances in neuroscience have deepened our knowledge of how the differing neural and hormonal systems of men and women shape somewhat differing brains. Much of this new research comes from improvements in brain-imaging technology. (Actually, there are some problems that come from reading too much into these brain images. But that's a post for another day.) Louann Brizendine's The Female Brain (New York: Morgan Road Books, 2006) surveys this research, refuting the idea of the "unisex brain" and supporting the idea that female brains differ biologically from male brains.
The popular success of Brizendine's book comes from her combining impressive credentials and a lively writing style. She is a neuropsychiatrist at the University of California, San Francisco, and the founder of the Women's and Teen Girls' Mood and Hormone Clinic. She has advanced degrees in medicine and neurobiology from Yale and the University of California, Berkeley. She translates her expert knowledge into engaging writing by using anecdotes about the women she has counseled and by adopting a light and witty style. The main text of the book is written without any footnotes. But it's followed by 80 pages of citations and references to the scientific research supporting her claims.
Some of Brizendine's academic critics are put off by the cutesy style of the writing. For example, she identifies the hormone oxytocin as "fluffy, purring kitty; cuddly, nurturing, earth mother; the good witch Glinda in The Wizard of Oz; finds pleasure in helping and serving; sister to vasopressin (the male socializing hormone), sister to estrogen, friend of dopamine (another feel-good brain chemical)" (xv). Of course, this is not the language of scientific research reports. But I can't see what's wrong with such language in a book written for a wide, popular audience, as long as the clever writing is not deceptive.
A more serious criticism is that sometimes the citations in the back of Brizendine's book don't clearly support the claims she makes in her main text. For example, in speaking about how women tend to be more talkative than men, she writes: "Men use about seven thousand words per day. Women use about twenty thousand" (14). Readers who wonder where she got these numbers might check the notes, where they discover that they come from a self-help book that cites no specific research to sustain this claim. Similarly, she writes: "Girls speak faster on average--250 words per minute versus 125 for typical males" (36). Readers who consult the notes will see a reference to an article about "speaking rates" among "20 pre-school stuttering and non-stuttering children and their mothers," which actually says nothing about differences between girls and boys in their speaking.
Here's one more example of the dubious connection between Brizendine's text and her citations. She says that one of the reasons why women have a better memory for emotional details than men do is that "a woman's amygdala is more easily activated by emotional nuance" (128). Her supporting reference is an article by Stephen Hamann, "Sex Differences in the Responses of the Human Amygdala." If you read the article, you will see that its conclusions are much more complicated than what Brizendine reports. According to Hamann's survey of the research, high amygdala activity is related to emotional memory for both men and women; but for men, it's the right amygdala, while for women it's the left amygdala. And yet, Hamann notes, conscious ratings of emotional arousal correlate with the left amygdala for both men and women. Hamann then suggests that women's greater emotional memory might be a product of the fact that there is more overlap of these brain regions in women. Moreover, Hamann also shows that men, in contrast to women, manifest greater amygdala activity in response to visual sexual stimuli. This is, I think, a typical example of how Brizendine oversimplifies the research she's citing to make it sharper for her purposes.
Another example of this runs throughout the book, and it can be seen easily by an attentive reader, even without checking her citations. Oxytocin is one of the main protagonists in Brizendine's story. The Greek etymology of "oxytocin" denotes "quick delivery," because this hormone stimulates contractions of the uterus during childbirth. Farmers inject their pregnant animals with oxytocin to induce delivery. But Brizendine speaks of oxytocin as associated not just with childbirth but also with love, trust, orgasm, hugging, the calming of breast-feeding babies, and monogamous pair-bonding. Yet she never acknowledges what should be evident to any careful reader: oxytocin does nothing by itself, because its diverse effects depend upon the diverse contexts in which it appears. This sort of causal complexity and contextuality is common in biology. But Brizendine tends to play it down in order to tell a clear, engaging story.
At the root of this problem is the big issue of nature versus nurture. Brizendine is sometimes confusing in how she approaches this issue. In her Introduction, for example, she says that female aptitudes are naturally "hardwired into the brains of women" (8). This suggests that she is a biological determinist, and it opens her up to all of the reasonable criticisms of such determinism. But then on the next page, she speaks of how the female brain is shaped "by a combination of nature and nurture" (9). Elsewhere in the book, she writes about how female propensities reflect the complex interaction of genes, hormones, life history, and social interactions (20, 27, 55, 68-69, 74, 100, 110, 133, 136-37, 147).
Here she comes close to the idea of "nurturing nature" that I lay out--in Darwinian Natural Right and Darwinian Conservatism--as an alternative to the false dichotomy between nature and nurture. To fully account for human nature, we need to see human thought and action as arising from the complex interaction of natural propensities, cultural traditions, and individual judgments. In fact, many of Brizendine's stories about the women she counsels illustrate that complexity of human experience.
This certainly holds true, for example, in what she says about the "mommy brain." Most women are naturally inclined to reproduction and child care, and that natural inclination is evident in their brains as products of human evolution. But the specific expression of that natural propensity will vary according to the social circumstances of life and the individual judgments of women making the best of their lives in the face of the sometimes difficult trade-offs presented to them.
As I indicated a few weeks ago in my post on Wendell Berry and E. O. Wilson, I agree with Berry that to understand the human mind we need something more than the formula "mind = brain = machine." We need the more complex formula "mind = brain + body + world + local dwelling place + community + history," with "history" encompassing "the whole heritage of culture, language, memory, tools, and skills."
If one keeps in mind the emergent complexity of human life--and thus rejects any reductionist determinism--then one can read Brizendine's book as a rich survey of the female brain, the male brain, and the human brain shared by both men and women.