Thursday, October 31, 2013

Do We Want Our Dead Bodies to be Resurrected to Eternal Life?

As I indicated in a post a few years ago, Thomas Aquinas taught that since "the soul is united to the body as form to matter," the perfection of the soul after the death of the body requires a resurrection of the body and its reunion with the soul.  This must be so, because "the state of the soul in the body is more perfect than outside the body" (Summa Theologica, suppl., q. 75, a. 1).

Furthermore, according to Aquinas, this resurrected body must be a real living body.  And since all living bodies are aging bodies, the resurrected bodies must have a specific age.  Since Jesus rose again at about age 30, that age must be the perfect age for the body, and so, Aquinas reasons, when human beings are resurrected, they will all have bodies of the same age--30 years old.  Those who died as children will be moved up to age 30, and those who died in old age will be moved back to age 30 (ST, suppl., q. 81, a. 1).  Here Aquinas is following the authority of Augustine (City of God, xxii.15).

But then we must wonder, when people wish for immortality, is this what they're wishing for--to be frozen eternally at one moment in time?  Would an eternal afterlife without time and aging really be a human life?  Does this idea of eternally ageless bodies make any sense?  Is Aquinas really serious about this?

Aquinas had to say this to remain faithful to Christian orthodoxy, which affirms both the separation of the immortal soul from the body at death and the later resurrection of the body at the Second Coming of Christ for reunification with the soul, while also affirming that the damned go to eternal punishment in Hell, and the saved go to eternal bliss in Heaven.  This Christian teaching is thus a remarkable synthesis of Plato's teaching about the immortality of the soul (in The Republic, The Laws, and Phaedo) and Saint Paul's teaching about the resurrection of the body (in First Corinthians 15).  Many Christians today have become heretics in that they believe in the immortality of the soul but not in the physical resurrection of the body to eternal life.  To many Christians today, a corpse resurrected to life sounds too much like a zombie to be believable.

Stephen Cave identifies the Resurrection Narrative as the second way to immortality.  If we find it hard to believe that we can live forever by keeping our bodies alive indefinitely, then we might hope that our dead bodies will someday be resurrected back to life.  This hope has an intuitive appeal to human beings, who observe the cyclical patterns in nature--life, death, and rebirth.  If each spring brings a renewal of life, then perhaps the dead can be reborn.  The evidence of burials and rituals extending back hundreds of thousands of years suggests a universal human hope that the dead might rise again.

The cause of resurrection could be either divine power or human technology.  People who have their bodies frozen hope that future technological advances will allow their bodies to be reanimated.  Some people hope that when they die, their minds can be digitally downloaded into a robotic brain that would replicate them.

In contrast to the other forms of immortality, the Resurrection Narrative accepts the reality of death.  But as Cave and other philosophers have indicated, this creates a problem.  If a human being has completely ceased to exist--a person has died, and the body has rotted or has been cremated--how can any new version of that person be assembled so that it is literally the same person coming back to life?  Why should I find it reassuring to believe that after I am completely dead, a copy of me will be built that looks, acts, and thinks like me?  Will this copy of me really be me?  Or will it be only a copy?

This is also a problem for the Biblical doctrine of the Day of Judgment.  If my resurrected self really deserves to be eternally rewarded or eternally punished for how I lived my life, then my immortal self would have to be morally identical to my mortal self.

Cave suggests that there are at least two views of how resurrection could happen.  According to the reassembly view, resurrection means that after the disintegration of the body in death, all of the original parts of the body are put back together in exactly the right order to restore the living body.  According to the replication view, resurrection requires not the complete reassembly of the material body, but rather the replication of a person's psychology--the memories, desires, and beliefs that constitute the emotional and intellectual identity of a person.

There are serious problems with both views.  First, there's the Cannibal Problem.  If you were eaten by a cannibal, how could you and the cannibal be resurrected?  If the bits of you in the cannibal's body are restored to you in your resurrection, wouldn't the cannibal's resurrected body be missing some of its original parts?  We don't have to imagine cannibalism to see the more general problem.  The human body is constantly losing some atomic bits and gaining new ones, so that these atomic bits can flow through many different bodies.  It's not clear then how all of this could be sorted out in the process of resurrection.  Augustine (City of God, xxii, 12-22) and Aquinas (ST, suppl., qq. 79-80) tried to resolve these and similar problems with the idea of resurrection.  But it's not clear that they succeeded.

Christians like Augustine and Aquinas have argued that, as indicated by Paul, our resurrected bodies will be glorified or perfected bodies that will be free of bodily defects and with no need for eating, drinking, or copulation.  But this creates the Transformation Problem.  If our bodies have to be utterly transformed in resurrection, then it seems that our resurrected bodies are not the same as our original mortal bodies.  If so, then it seems that we are not being resurrected to life, because we are actually being replaced.

The replication view of resurrection also has its problems.  Let's say that we're not resurrecting the whole body, because it's enough to replicate the right psychological blueprint--the distinctive personality of someone--regardless of the physical materials used.  So, shortly before your death, we download all of your memories, emotional dispositions, and opinions onto a digital file.  We then later upload that file into the brain of a robot.  Would that robot really be you?  Or would it be only a copy of you, not the real you?  If we uploaded the file into two or more robots, would they all be you?  Surely not, if two or more separate persons cannot be the same person.

Once we begin pondering questions like this, the whole idea of immortality through resurrection seems incoherent.

But then we might turn to the Soul Narrative as an alternative form of immortality, which we can take up in the next post.


Tuesday, October 29, 2013

Do We Really Want to Live Forever?

It's easy to believe in immortality, but only as long as one doesn't think much about it.

That's the conclusion that I draw from Stephen Cave's book Immortality: The Quest to Live Forever and How It Drives Civilization (Crown Publishers, 2012), which is a thoughtful survey of the various ways to immortality, of the reasons why they are all illusions, and of the reasons why we should accept our human mortality without being crippled by fear of death.  Cave has drawn much of his reasoning from two books--Corliss Lamont's The Illusion of Immortality (first published in 1935) and Immortality (a collection of writing on the subject edited by Paul Edwards and first published in 1997). Cave's book helps me to extend and clarify what I have said in previous posts about the Darwinian assessment of the human longing for immortality. 

Some of my critics (Peter Lawler, for example) have argued that Darwinian science cannot account for the uniquely human anxiety about death and yearning for immortality.  Darwin freely admitted "that no animal is self-conscious, if by this term it is implied, that he reflects on such points, as whence he comes or whither he will go, or what is life and death, and so forth" (Descent of Man, Penguin Classics, 105).  And yet he thought that this uniquely human self-consciousness in reflecting on the meaning of life and death could be explained as emerging from the natural evolution of human cognitive capacities. 

As Cave explains it, we can see the evolutionary advantages of human self-consciousness and imagination: they promote an intense concern for our self-preservation and well-being, and by picturing the threats to our lives that might lie in the future, they allow us to plan how best to protect our existence.  But excessive concern for the self and excessive concern with future threats can lead us to obsess about our mortality and to fantasize about immortality in ways that blind us to the fact that worrying about death is foolish.

To reach this conclusion, we need to see that all of the ways to immortality are nonsensical and that there is wisdom in accepting human mortality.  Cave distinguishes four ways to achieve immortality and one way to achieve a wise acceptance of mortality.  He shows that all four of the ways to immortality were imagined in ancient Egypt and that the sensible acceptance of mortality is evident in a tradition of "wisdom literature" that began in ancient Sumer.

STAYING ALIVE
The first way to immortality is what Cave calls the Staying Alive Narrative.  We could become immortal if we could put off aging and death indefinitely.  In every civilization, there are stories about the search for the elixir of life--for some substance or technique that would keep one alive forever.  In the Gilgamesh Epic--perhaps the oldest piece of literature that has survived--Gilgamesh goes on a quest for the secret of immortal life.  In ancient Egypt, there was an elaborate system of medicine and magic directed to prolonging life and slowing aging for as long as possible.  This quest for staying alive forever continues today in the hopes of those who believe that modern science will inevitably allow us to extend life to the point of conquering death, which was one of the promises of early modern science coming from people like Francis Bacon and Rene Descartes.

This will to live forever is rooted in our evolutionary nature.  Like all living beings, human beings have evolved to preserve and reproduce themselves.  Human beings are unique, however, in that their evolved cognitive abilities for conceptual abstraction, cultural learning, and imaginative projection into the future allow them to imagine how they might achieve immortality.

While most of us today recognize the futility of the ancient quest for staying alive forever, some of us see evidence that modern science will eventually succeed in this quest.  First of all, it is evident that modern science has already extended our average lifespan.  Over the past two centuries, the scientific understanding of microbial infection has brought improvements in sanitation and medical vaccination, and the discovery of antibiotics has allowed us to fight infectious diseases.  As a consequence, average life expectancy roughly doubled over the course of the twentieth century.  Recently, life expectancy in the most developed countries has been increasing by a few years each decade.  Projecting this trend into the future seems to some people to be the scientific way to immortality.

And yet there are lots of scientific reasons for thinking that we might be reaching the natural limit of the human lifespan.  Medical improvements might eventually allow most human beings to live to something around 122 years, which is the longest lifespan of any person on record (the Frenchwoman Jeanne Calment).  But this is not immortality.

As I argued in Darwinian Conservatism, senescence--the process of bodily decay at older ages--is probably so deeply rooted in the adaptive complexity of our bodies as shaped by natural selection that it cannot be abolished by biotechnological changes.  It is likely that aging is controlled by so many genes interacting in such complex ways that it would be hard to eliminate the genetic mechanisms for aging, and thus to greatly lengthen the life span, without disrupting other beneficial mechanisms.

A few years ago, 51 leading scientists who study aging published a statement in Scientific American declaring that there was "no truth to the fountain of youth."  They reasoned that "it is an inescapable biological reality that once the engine of life switches on, the body inevitably sows the seeds of its own destruction."  Since there is no scientifically proven way to change the process of aging, "the prospect of humans living forever is as unlikely today as it has always been."

One plausible evolutionary explanation for senescence has been offered by biologist George Williams.  Genes commonly have more than one effect.  A gene might confer great benefits at young ages but have such harmful effects in old age that few people could live past 100.  In the environments of evolutionary history, most people probably died (from accidents and other causes) long before they could even get close to age 100.  In those conditions, this gene would spread by natural selection because people would enjoy its beneficial effects in youth, in ways that would enhance their reproductive fitness, while few people would have lived long enough to experience the gene's bad effects.  The accumulation over evolutionary history of such genes that are beneficial in youth but harmful in old age might explain the aging process.  The general idea is that the evolutionary economy of nature works on the principle of trade-offs between costs and benefits.  To get youthful energy, we must accept senescent decline.  Williams suggested that we should find consolation in the thought that "senescence is the price we pay for vigor in youth."  Instead of longing to live forever, we should live the life we have as fully as we can until we reach our completion.
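Williams's trade-off logic can be illustrated with a toy calculation.  All of the parameters below--the fertility schedules, the mortality rates, and the age at which the gene turns harmful--are hypothetical numbers chosen only for illustration, not data from any study:

```python
# Toy model of Williams's antagonistic pleiotropy argument.  A gene gives
# its carriers a fertility bonus but kills them at a "harm age."  Expected
# lifetime reproduction is yearly fertility weighted by the probability of
# surviving to each year under a constant extrinsic (accidental) death rate.

def expected_offspring(fertility, extrinsic_death_prob, harm_age=None,
                       max_age=100):
    """Sum of fertility at each age, weighted by survival to that age."""
    survival, total = 1.0, 0.0
    for age in range(max_age):
        if harm_age is not None and age >= harm_age:
            break  # the pleiotropic gene kills its carrier at this age
        total += survival * fertility(age)
        survival *= 1.0 - extrinsic_death_prob
    return total

base = lambda age: 0.50 if 15 <= age < 80 else 0.0     # without the gene
carrier = lambda age: 0.55 if 15 <= age < 80 else 0.0  # 10% bonus, dies at 60

# Ancestral conditions (5% yearly accidental mortality): the gene wins,
# because almost no one survives to age 60 to pay its cost.
print(expected_offspring(carrier, 0.05, harm_age=60) >
      expected_offspring(base, 0.05))    # True

# Modern conditions (0.5% yearly mortality): the same gene would lose.
print(expected_offspring(carrier, 0.005, harm_age=60) >
      expected_offspring(base, 0.005))   # False
```

Under high extrinsic mortality the late-life cost is almost never paid, so selection favors the gene; lower the mortality rate and the verdict flips--which is just Williams's point that senescence is the price of youthful vigor.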

We cannot say that it is absolutely impossible for science to extend the human life span indefinitely, but we can say that this is highly unlikely.

Even if the indefinite extension of healthy life were made possible by medical means, this would not give us immortality, because we would still be subject to accidental causes of death.  A few years ago, aging researcher Steven Austad calculated that if we were free from aging and disease, our average lifespan would be 5,775 years.  He did this by extrapolating from the survival rates for nine-year-olds in the United States, because they are least likely to die from illness.  Living for five or six thousand years might sound pretty good, but it's not immortality.
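The logic behind a figure like Austad's can be sketched with a back-of-the-envelope model--my reconstruction, not his published method: if death comes only from accident-like causes at a constant annual probability p, survival times are geometrically distributed, and the average lifespan is simply 1/p.

```python
# Sketch of the extrapolation behind the 5,775-year figure (the exact
# method here is an assumption): with aging and disease removed, suppose
# a constant annual death probability p, roughly the mortality rate of
# American nine-year-olds.  The mean of the resulting geometric
# survival-time distribution is 1/p.

def average_lifespan(annual_death_prob):
    """Mean survival time, in years, for a constant yearly death risk."""
    return 1.0 / annual_death_prob

# An annual death probability of about 1 in 5,775 (roughly 0.017%)
# yields the quoted average lifespan.
print(round(average_lifespan(1 / 5775)))  # 5775
```

The point of the sketch is that even a vanishingly small yearly risk of accidental death compounds into a finite expected lifespan--long, but nothing like immortality.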

If keeping our bodies alive forever seems improbable, then we need a backup plan.  The ancient Egyptians carefully preserved corpses through mummification so that they could be reanimated in the afterlife.  This is the second way to immortality--the Resurrection Narrative--which became part of the Christian tradition through the influence of Paul in the New Testament.

To be continued . . .

Sunday, October 20, 2013

A Complete Adult Hominid Skull from the Early Pleistocene

The current issue of Science (October 18) has articles reporting the discovery in the Republic of Georgia of the first complete adult hominid skull from the early Pleistocene.  Images of the skull can be seen in a slideshow.  John Noble Wilford has a good article on this in the New York Times.

Dated at about 1.8 million years ago, this is one of five skulls found at the Dmanisi site in Georgia.  All five individuals lived at about the same time.  "Skull 5" is the best preserved skull, and it probably belonged to an elderly male with worn front teeth.  He had a small brain of 546 cubic centimeters, compared with an average of about 1,350 cubic centimeters for modern humans.  The other skulls probably belonged to two mature males, a young female, and an adolescent of unknown sex.

What is most fascinating about these five skulls is that they are as variable as African fossils that have been classified in three different species--H. erectus, H. habilis, and H. rudolfensis.  David Lordkipanidze and his coauthors argue that the five skulls from Dmanisi belong to one species--H. erectus--and that the early Homo fossils in Africa also belong to one lineage of human ancestors.  If this is true, it would radically alter the common view of human evolution.

What does this tell us about the evidence for human evolution in the Pleistocene?  First, it shows us how limited the human fossil record is, because here we see that there is only one fully preserved adult hominid skull from the early Pleistocene. 

But, second, this also shows that evidence--although limited--really does matter, and that the evidence can be used to falsify hypotheses about human evolution.  For example, there is a well-preserved early Homo maxilla (upper jaw) A.L. 666-1 from Hadar, Ethiopia, dated at about 2.33 million years ago, which seems to be the best evidence for early Homo before 1.90 million years ago.  Recently, however, Robyn Pickering and others have questioned the status of A.L. 666-1, and they have proposed Australopithecus sediba fossils from the site of Malapa in South Africa dated at 1.977 million years ago as the best evidence for the earliest ancestor of the Homo lineage.  Now, Lordkipanidze and his coauthors argue that the similarities between A.L. 666-1 and skull 5 from Dmanisi effectively falsify this claim for A. sediba.

Undoubtedly, this new evidence will not resolve the continuing controversy among evolutionary scientists about the precise path of human ancestry in the Pleistocene.  But still this controversy will be constrained by the available fossil evidence and by any new evidence that might turn up in the future.

By contrast, the creationists and the intelligent design theorists have no fossil evidence for any alternative to the Darwinian account of human evolution.  Where's the fossil evidence for the supernatural intervention of a creator or an intelligent designer in forming Homo sapiens?  If the theories of special creation or intelligent design are not falsifiable through evidence, then they are not really scientific theories at all.

In 1996, Pope John Paul II said that the Catholic Church could fully accept the truth of Darwinian evolution, but with one qualification--the creation of the human soul required an "ontological leap" that could only come from a miraculous intervention by God.  Is there any fossil evidence for this?  Why should we assume that God was unable or unwilling to allow a natural evolutionary process to create the human soul?

Can't we assume that the evolution of the human soul depends on the evolutionary increase in the size and complexity of the primate brain?  Isn't that why the discovery of hominid skulls is exciting, because it allows us to estimate the size and complexity of the brain?  If so, then the theistic evolutionist can see God's creation of the human soul through the natural evolution of the primate brain.

Darwin allowed for such theistic evolution by speaking of natural evolution through "secondary causes."


REFERENCES

William H. Kimbel, "Hesitation on Hominin History," Nature 497 (May 30, 2013): 573-74.

David Lordkipanidze, et al., "A Complete Skull from Dmanisi, Georgia, and the Evolutionary Biology of Early Homo," Science 342 (October 18, 2013): 326-31.

Robyn Pickering, et al., "Australopithecus sediba at 1.977 Ma and Implications for the Origins of the Genus Homo," Science 333 (September 9, 2011): 1421-23.

Friday, October 11, 2013

God and Evolution at Lone Star College

The debate over the moral and religious implications of Darwin's theory of evolution continues to stir deep emotions in young people. 

I saw that when I lectured yesterday at Lone Star College-Kingwood in Houston.  I met with some faculty members in the afternoon for a discussion of my Darwinian Conservatism book.  I then lectured in the evening on "Does Darwin Subvert or Support Morality?"  There was a standing-room-only audience of 200 or more for the lecture, and we had a lively discussion after the lecture. 

I was very pleased to meet Kent Guida there.  We have been--as he says--"pen pals" for over 14 years, but we had never met.

I was impressed by the intellectual energy of both the faculty and the students at Lone Star.  They represented many disciplines--history, literature, philosophy, political science, psychology, and others--and they were able to speak with one another in a vigorous and thoughtful way.  At Lone Star, the faculty encourage this kind of interdisciplinary discussion by adopting one book that is to be discussed across the courses for a year. This is what every program in liberal education should be doing, but it's rare.

Another example of this stimulating interdisciplinary work at Lone Star is the course being taught this semester by John Barr on comparing Abraham Lincoln and Charles Darwin.

In the small group discussion with faculty members, one objection to my argument for Darwinian conservatism was that I was attacking a "straw man" in dismissing the tradition of the left as "utopian" in contrast to the "realism" of conservatism.  I admitted that this is one of the best criticisms of my argument.  I think it is true that the history of leftist thought has been largely utopian in aspiring for a total transformation of the human condition to achieve an equality of outcome in social life.  But in recent decades, especially since the collapse of the Marxist regimes in 1989, the left has largely given up its utopian aspirations as it has settled into a conservative left proposing moderate social reforms.  So, for example, socialists no longer argue for public ownership of the means of production or for confiscatory tax rates for the rich.  Similarly, as I have indicated in Darwinian Natural Right, the Israeli kibbutzim were originally utopian in trying to abolish private families and private property for the sake of complete equality, but now the kibbutzim have largely thrown out these radical aims because the people in the kibbutzim found them unbearable.  For me, this shows that the emotional costs of utopian projects are too high when they deny our evolved natural desires.

Clearly, the primary reason for the large turnout of students at the lecture was the interest in my argument for applying Darwinian reasoning to morality and religion.  Many of the students at Lone Star are serious Christians who have adopted a Biblical creationism as an alternative to Darwinian evolution.  Some of them were homeschooled by their parents, so that they could be taught Biblical science rather than evolutionary science.  But there were also some students who saw no necessary conflict between their Christian beliefs and accepting Darwin's theory.  And there were others--skeptics or atheists--who embraced Darwinian science as supporting human morality without any need for appeal to the divine.

I criticized both "scientific creationism" and "intelligent design theory" for not explaining exactly where, when, and how the Creator or Intelligent Designer created all forms of life.  Actually, as some students pointed out to me, the creationists who defend a literal six-days-of-creation Biblical story do propose a potentially testable theory.  But do they really believe that we should be able to find evidence that all species of life were created in six days?  If so, then we should see evidence that human beings existed contemporaneously with dinosaurs and many other species of life that seem to have gone extinct before the emergence of Homo sapiens.

The primary concern of the students, however, was not so much the scientific debate over origins but the moral debate over whether a Darwinian account of the evolution of the moral sense could sustain moral order based on a purely natural ground without any necessary appeal to supernatural religious doctrines.  My argument was that while religious belief could often provide helpful support for our morality, it was not absolutely necessary, and that when religious traditions cannot resolve great moral issues--like slavery, for example--we must turn to our evolved natural moral sense.

Some of the students thought that relying on the purely human grounds of morality--human nature, human culture, and human judgment--deprived us of any "objective" standard of right and wrong that could only come from the divine law of the Bible.  I used the illustration of the debate over slavery to suggest that sometimes the Bible is either unclear or unreliable in its moral teaching, which forces us to pass the Bible through our naturally evolved moral sense.  Before the Civil War, many American Christians read the Bible as supporting slavery as a dictate of divine law.  After the Civil War, most Christians assumed that the Bible clearly condemns slavery as wrong, despite the fact that all of the passages of the Bible specifically on slavery seem to support it.

In the discussion, I indicated that I have argued for introducing this debate into our high school and college classrooms.  Why shouldn't high school biology students be able to read texts representing all sides of this debate so that they can make up their own minds based on their weighing of the evidence and arguments?  Why shouldn't all students have the sort of open discussion that we had at Lone Star College? 

If they did have such discussions, students might discover the inescapability of the Reason/Revelation debate, and the difficulty--perhaps impossibility--of either side in that debate refuting the other.  Unfortunately, I suspect that most high school and college teachers are not prepared to handle such a discussion.  But it's good to see that such a discussion is possible at places like Lone Star College.

A few of my many posts on related themes can be found here, here, here, here, here, here, here, here, and here.

For the opponents of evolution, a good theme song might be Tecumseh Fitch's "I Don't Believe in Evolution."

Monday, October 07, 2013

Kirk's Permanent Things and Hayek's Enduring Things

The Philadelphia Society's fall meeting was on "The Permanent Things."  The panels were designed to celebrate the 60th anniversary of the publication of Russell Kirk's The Conservative Mind in 1953.  "The permanent things" was a phrase that Kirk took from T. S. Eliot, and much of the discussion turned on the interpretation and assessment of that idea.

Almost all of the speakers were uncritical in their praise of Kirk.  The only critics were me and Alan Charles Kors.  Kors is a prominent historian at the University of Pennsylvania who is known for his studies of the intellectual history of seventeenth and eighteenth century Europe.  Kors defended the French Enlightenment, which was a bold move before an audience of Kirkian conservatives.  I defended Friedrich Hayek's evolutionary conservatism as an alternative to Kirk's metaphysical conservatism.

Kors and I were in complete agreement.  That might seem odd, particularly since Hayek presented his Burkean evolutionary conservatism as rooted in the British and Scottish Enlightenment as opposed to the French Enlightenment. 

But Kors argued that what Burkean conservatives criticize as the excesses of the French Revolution--the Jacobins and the Reign of Terror--manifest the intellectual legacy of Rousseau rather than the philosophes.  After all, Rousseau was a vehement opponent of the philosophes.  Leaders of the French Enlightenment like Voltaire and Montesquieu opposed every form of despotism and supported the tolerance, liberty, and commercial spirit that they saw in Great Britain.  Robespierre's "Republic of Virtue" was inspired not by the thought of Voltaire or Montesquieu but by Rousseau's Social Contract.  For example, Robespierre's "Religion of the Supreme Being" was explicitly an attempt to enforce Rousseau's teaching that all citizens must embrace a deistic religion, and that neither atheists nor Christians can be true citizens.

In his speech, Kors often referred to Hayek in ways that suggested that most of French Enlightenment thought was in agreement with Hayek's evolutionary liberalism, which I defended in my speech.

In arguing that Hayek's evolutionary reasoning was superior to Kirk's metaphysical reasoning, I faced a problem that was implicit throughout this conference: it is hard to develop any intellectual assessment of Kirk's thought, because he was not a moral or political thinker but rather a man of poetic imagination.  This was indicated in the first speech of the conference by Bradley Birzer of Hillsdale College.  Having recently completed a biography of Kirk based on his study of Kirk's papers and correspondence, Birzer was in a good position to offer a general view of Kirk's work.  His main conclusion was that Kirk was not a political philosopher who developed his reasoning in a logically rigorous manner, because he was actually a literary or historical story-teller.  The appeal of Kirk comes from the imaginative, evocative style of his writing, not from any rigorous argumentation.  Birzer even admitted that Kirk often contradicted himself.  But still Birzer tried to defend Kirk's thought as coherent.

In one of the first critical reviews of The Conservative Mind--in The Freeman (July 1955)--Frank Meyer complained that Kirk's conservatism was "not a body of principles, but a tone, an attitude," an attitude of reverence for the wisdom of tradition.  Like Hayek, Meyer agreed that human reason needed to operate within tradition; but he also thought that we need to appeal to some rational principles in judging tradition.  After all, what we inherit by tradition is often in need of reform, and some traditions are so bad as to be tyrannical.  Occasionally, Kirk conceded this, but then he would never explain how this reformation of tradition was to be carried out.

So, for example, as Kors indicated in his speech, the oppressive traditions of the ancien régime in France needed to be overturned.  And those like Montesquieu and Voltaire offered arguments for prudent reforms that would move France towards the free, prosperous, and tolerant society that was emerging in Great Britain.

Similarly, I argued in my speech that the American Southern tradition of slavery was so obviously oppressive that it needed to be abolished.  Kirk was unclear about this.  He did say, in The Conservative Mind, that Southern conservatives were mistaken in founding their conservatism on slavery.  But he didn't explain why he thought they were mistaken.  Moreover, while Kirk appealed to a metaphysics of the "permanent things"--a transcendent moral order of divine law--as the first principle of conservatism, he did not confront the fact that the Southern Christian conservatives grounded their defense of slavery in the divine law of the Bible.

Like Kirk, Richard Weaver insisted that any healthy culture had to be grounded in a "metaphysical dream of the world."  Weaver also saw that the "metaphysical dream" of the Southern slaveholders sanctified slavery as conforming to the Bible.  And he offered no way to judge that Southern tradition as wrong.

My argument was that our evolved human nature supports a natural moral sense that allows us to recognize that the tradition of Biblical religion was wrong in sanctioning slavery, because we can see that slaves are not naturally adapted to their slavery, and that they will resist enslavement as exploitation or social parasitism.  We can then pass the Bible through the filter of our natural moral sense and eliminate those Biblical traditions that we know to be evil.

Some of the speakers at this conference--such as David Jeffrey--suggested that the only source of moral authority was divine law, and particularly the divine law of the Hebrew Bible.  It was not clear to me, however, that Jeffrey and others really believed that we should embrace all of the Mosaic law, which would require a brutal despotism.

There was a disturbing tendency for some of the speakers to imply a divine command theory of moral order--that there is no way to know right from wrong except by obeying whatever God commands.  One speaker even appealed to Kierkegaard as correctly seeing that all standards come from divine authority.

Frederick Ross made the same kind of argument in his Slavery Ordained of God (1857).  He insisted that what is right and wrong is determined completely by God's command.  Therefore, if the Bible sanctions slavery, it must be right; and the abolitionist argument that slavery is wrong because it violates a natural standard of justice is atheism and blasphemy.

Surely, the Kirkian conservatives would want to reject this.  But to do that, they would have to appeal to some natural moral experience that allows us to correct the mistakes that can arise in traditions of divine law. 

Tuesday, October 01, 2013

Hitler's Philosophers: Reductio ad Hitlerum?

In his book on Leo Strauss, Will Altman points out that Strauss coined the phrase reductio ad Hitlerum.  In Natural Right and History, Strauss remarked: "we must avoid the fallacy that in the last decades has frequently been used as a substitute for the reductio ad absurdum: the reductio ad Hitlerum.  A view is not refuted by the fact that it happens to have been shared by Hitler" (42-43).

We might wonder, as Altman suggests, why Strauss didn't identify this fallacy as a noble fallacy.  Surely, it's good that our disgust with Hitler and the Nazis is so deep that we think a view is refuted by the fact that it happens to have been shared by Hitler.

But when we do this, we have to identify those views that were most directly responsible for the evils of Hitler's regime, and we have to understand the history of those views and their adoption by Hitler.  A view is not refuted by the fact that it happens to have been professed in a perversely distorted form by Hitler.

I thought about that while reading Yvonne Sherratt's new book Hitler's Philosophers (Yale University Press).  She sets out to tell "the story of Hitler's philosophers," and thus "the story of how philosophy was implicated in genocide" (xvi-xvii).  Contrary to what she asserts, she is not the first person to tell this story, as should be clear from the many citations in her notes to other books on the history of philosophers connected to Hitler.  But she is the first person to tell this story as a "docudrama" in a "narrative style," an approach that fails for her because she has little skill for narrative writing.

She claims that this is "a work of non-fiction, carefully researched, based upon archival material, letters, photographs, paintings, verbal reports and descriptions, which have all been meticulously referenced" (xx).  In fact, her research is remarkably sloppy.  For example, she repeatedly claims that Hitler "came to regard himself as the 'philosopher Fuhrer'" or "philosopher leader" (xviii, 16, 31, 35, 63, 127).  If Hitler actually called himself the "philosopher Fuhrer," as she implies, that would be something worth knowing.  But if you look for a citation for this claim, you will find only one reference (16, 267, n. 41) to Ian Kershaw's Hitler 1889-1936: Hubris, page 250.  If you then check Kershaw's book, you won't see any indication that Hitler ever called himself the "philosopher Fuhrer," although Kershaw does quote Hitler as saying: "the combination of theoretician, organizer, and leader in one person is the rarest thing that can be found on this earth; this combination makes the great man" (252).

Here Kershaw is quoting from Mein Kampf (trans. Ralph Manheim, 1943, pp. 580-81).  In Mein Kampf, Hitler identifies the "theoretician" as the man who grasps the "ideas" that constitute the "worldview" (Weltanschauung) necessary for a political party like National Socialism that will bring a total cultural transformation (pp. 452-62).  As an example of such leadership in cultural transformation, he mentions the triumph of Christianity over paganism as "the first spiritual terror," and he suggests that National Socialism will overturn that Christian terror with a new spiritual terror to create a new cultural regime (pp. 454-55).  He also mentions Martin Luther, Frederick the Great, and Richard Wagner as examples of "great reformers" of culture (p. 213).

Sherratt claims that Martin Heidegger was identified as "Hitler's Superman" (104-107).  But she never provides any citation with evidence that anyone, Heidegger included, ever used that label.  It is true, however, that Heidegger in his Rectoral Address at the University of Freiburg did speak of his "spiritual/intellectual leadership" (geistige Fuhrung) over Germany.  And he told Karl Jaspers that he wanted to "lead the Leader."  In Mein Kampf, Hitler spoke of the need for geistige Fuhrung (p. 457); and so Heidegger could have picked up this phrase from Hitler's book.

Despite the sloppiness of her research, Sherratt does provide a comprehensive history of the connections between Hitler and German philosophy.  First, she surveys Hitler's references to German philosophers like Kant, Fichte, Hegel, Schopenhauer, and Nietzsche.  Then she claims to find in those philosophers evidence that Hitler's reading of them was valid, because one can see in them many of the elements of Hitler's Nazism--anti-Semitism, German nationalism, militarism, racism, and eugenics.  She then surveys the history of those many philosophers in Germany who collaborated with Hitler and the Nazis--Heidegger, Carl Schmitt, and many others. 

She also provides a history of Hitler's philosophical opponents who were forced to leave Germany--particularly, Walter Benjamin, Theodor Adorno, and Hannah Arendt.  And she tells the remarkable story of Kurt Huber and the White Rose resistance movement.  As a professor of philosophy at the University of Munich, Huber became the only academic philosopher in Germany to resist Nazism.  He joined with a group of his students at the University--led by Sophie Scholl and her brother Hans Scholl--who wrote and distributed a series of leaflets in 1942-1943 calling for nonviolent resistance to the Nazi regime.  They were arrested and convicted of treason.  Some of them--including Huber--were executed by beheading.  (The texts of their leaflets can be found online.)

Finally, Sherratt surveys the history of how most of the Nazi philosophers managed to avoid any severe punishment after the war and to even resume their academic careers in Germany.  Heidegger was especially despicable in lying about his active support of the Nazis, and even claiming that he supported the White Rose resistance, so that he could return to the University of Freiburg and eventually regain his high position, even to the point of being praised by some academic philosophers as the greatest philosopher of the twentieth century.

This history is helpful to anyone who wants to make an intellectual assessment of the connection between philosophy and the evils of Nazism.  But Sherratt's own intellectual assessment of the issues raised by this history is shallow and confused.  For example, she points to Kant's anti-Semitism as showing that Hitler was correct to identify him as a forerunner to Nazism (36-41).  But then she notes that Huber read Kant very differently from how Hitler read him: "Kant, in fact, was one of Huber's main weapons in his intellectual resistance to Nazism, and Huber lectured on him as often as he could" (215).  She makes no attempt to decide whether Huber's reading of Kant was better than Hitler's.

Moreover, she never tries to identify clearly the philosophical mistake that leads philosophers like Heidegger to support evil tyranny like that of the Nazis.  She might have done this if she had carefully read Nietzsche.  As is usually the case for scholars looking at the connection between Nietzsche and Nazism, she skims over some ideas in Nietzsche's early and late writings, while giving his middle period only one sentence: "Nietzsche explored many ideas throughout his life including, in his middle period, rationalism and Enlightenment philosophy" (48).

If she had actually read Nietzsche's Human, All Too Human, Sherratt might have noticed that Nietzsche identified there the philosophical mistake that can lead philosophers to support tyranny.  She might also have noticed that it was Nietzsche's adoption of Darwinian evolutionary science that allowed him to see that philosophical mistake.

Evolutionary science allowed Nietzsche to see that social order arises best as a largely spontaneous order of human cultural evolution that does not require any intelligently designed metaphysical order enforced by state coercion.  This led Nietzsche to embrace the liberal separation of culture and the state, so that the purpose of the state is only to protect individuals from one another, leaving cultural life as a realm for freedom of thought and action.

By contrast, Nietzsche saw, philosophers like Plato and Heidegger who want the state to execute their "spiritual leadership" in enforcing their vision of metaphysical order become "tyrants of the spirit," and consequently they are easily seduced by tyrants like Hitler.  (See Human, All Too Human, 1-9, 235, 261, 438-41, 465, 472, 474.)

Unfortunately, Nietzsche moved away from the Darwinian aristocratic liberalism of his middle period to the Dionysian aristocratic radicalism of his later writings.  And it was the latter that inspired Hitler and the Nazis.

Some of these points are elaborated in other posts here, here, here, here, here, and here.