Saturday, May 26, 2018

Does IQ Inequality Deny the Declaration of Independence?

When I first began thinking as a college student about how a Darwinian science of evolved human nature might provide the foundation for political philosophy, I noticed that such thinking provoked vehement scorn in the academic world; and I worried that pursuing this line of thinking would make it impossible for me to have a successful academic career.  But now, as I look back over my lifetime, I am astonished at how much the debate over Darwinian social science has shifted in favor of the Darwinian position, because the weight of the accumulating evidence has become too great to ignore.  One of the most dramatic examples of this is the history of the debate over the science of intelligence as measured by IQ.

In 1969, in my junior year at the University of Dallas, Arthur Jensen published a long article in the Harvard Educational Review--"How Much Can We Boost IQ and Scholastic Achievement?"  He answered his question in the first sentence: "Compensatory education has been tried, and it apparently has failed."  In the United States and elsewhere, children from lower class families were not as successful on average as children from higher class families.  The lower class children seemed to be less intelligent on average, as measured by their low scores on IQ tests.  Since it was commonly assumed that human intelligence, like most other human capabilities, was shaped mostly, if not entirely, by the social environment, public policy makers believed that if lower class children were given advanced educational opportunities at an early age (such as the Head Start program in the U.S.), this would raise their intelligence so that they would show the same scholastic achievement as the upper class children.  Jensen's survey of the evidence that this had failed, and that the failure was due to genetically innate differences in intelligence that could not be easily changed by environmental factors, provoked outrage: he was denounced as a racist and a fascist.  His teaching and his lectures were disrupted by violent protests, and many people demanded that he be fired from his job at the University of California-Berkeley.  Jensen had provoked this anger because he had challenged the egalitarian claim of left liberalism that human beings are born with equal capacities that can be cultivated in any direction by the social environment of their early childhood.

In September of 1971, when I was beginning my graduate work at the University of Chicago, Richard Herrnstein published an article in The Atlantic entitled "I.Q."  Two years later, he expanded his article into a book--I.Q. in the Meritocracy.  Like Jensen, he argued that while general intelligence (g) as measured by IQ tests was shaped by both genes and environment, the variation in intelligence was due mostly to genes--perhaps as much as 80%. Moreover, he claimed that in modern liberal societies, which strive to remove the social and legal obstacles to social mobility, actual social mobility would be blocked by the innate human differences in intelligence.  When people are free to rise and fall by their own merit, they will sort themselves out according to their innate differences.  So societies that increase equality of opportunity for everyone will inevitably produce an unequal class structure where the smartest people will be the ruling class.

This tendency to meritocracy with a cognitive elite is strengthened by the growing complexity of modern societies in which the most highly paid and prestigious occupations require people who can handle cognitively challenging tasks, so that high IQ is correlated with economic success.  Thus, the class structure in an open liberal society will be built on natural human inequalities.

Herrnstein put his argument into the form of a syllogism:

1. If differences in mental abilities are inherited, and
2. If success requires those abilities, and
3. If earnings and prestige depend on success,
4. Then social standing (which reflects earnings and prestige) will be based to some extent on inherited differences among people (I.Q. in the Meritocracy, 198-199).

Herrnstein thought this had profound implications for political philosophy, because it refuted "the egalitarian society of our philosophical heritage" (221), a heritage that included not only Marxism but also the Declaration of Independence.  Both the Communist Manifesto and the Declaration of Independence had affirmed the "vision of a classless society," but Herrnstein seemed to show that we were not moving toward a classless society.  If he was right, then the arbitrary barriers to social mobility in a traditional aristocracy would be replaced by the biological barriers to social mobility in a modern meritocracy.

This bothered me because I was not willing to give up on the Lockean liberal principle of equal liberty as expressed in the Declaration of Independence.  I wondered whether there could be a Darwinian defense of this principle.

But while I was open to Herrnstein's reasoning, it seemed that most people in the academic world were not.  Like Jensen, he was subjected to angry persecution.  As a result of this, the scientific study of intelligence became a taboo subject.  Only a few people continued this research, and it was often hard for them to find the necessary funding.

Then, in 1994, the controversy was reignited by the publication of The Bell Curve: Intelligence and Class Structure in American Life, coauthored by Herrnstein and Charles Murray.  Herrnstein died before the publication of the book, so Murray was left to face the vitriolic attacks that it elicited.  As usual, he was denounced as a racist and a fascist.

The mob violence against Murray last year at Middlebury College shows that the Darwinian science of intelligence is still taboo for many professors and students.  And yet, it seems to me that in general the angry resistance is not as great as it once was, because the research on the genetic basis of intelligence has become so impressive that it has to be taken seriously.

Perhaps the best recent survey of that research is Richard Haier's The Neuroscience of Intelligence (Cambridge University Press), published last year.  Haier surveys the overwhelming evidence that has accumulated over 40 years supporting the genetic basis of intelligence.  He stresses that the most impressive evidence comes from neuroimaging--his own area of research--which now allows us to see how IQ scores are correlated with the structure and functioning of the brain.

He shows how the correlations among mental tests point to the existence of an underlying general factor of intelligence that is called g.  People who do well on one test tend to do well on other tests.  This holds for tests of reasoning, spatial ability, memory, processing speed, and vocabulary.
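
To make the idea of a general factor concrete, here is a minimal sketch of how a "positive manifold" of test correlations points to g.  The numbers (five tests, the g-loadings, the noise level) are illustrative assumptions of mine, not real test data, and the first principal component is only a crude stand-in for a proper factor analysis.

```python
import numpy as np

# A toy simulation (illustrative numbers, not real test data) of the
# "positive manifold" behind g: five different mental tests all draw on one
# latent general ability, so every pair of tests correlates positively, and
# the first principal component of the correlation matrix -- a crude
# stand-in for g -- captures a large share of the variance.
rng = np.random.default_rng(0)
n = 10_000
g = rng.normal(size=n)                           # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.6])   # hypothetical g-loadings
noise = rng.normal(size=(n, 5)) * 0.6            # test-specific variation
tests = np.outer(g, loadings) + noise            # simulated test scores

r = np.corrcoef(tests, rowvar=False)             # 5 x 5 correlation matrix
print("all pairwise correlations positive:",
      bool((r[np.triu_indices(5, k=1)] > 0).all()))

eigvals = np.linalg.eigvalsh(r)[::-1]            # eigenvalues, largest first
print("variance share of first component:",
      round(eigvals[0] / eigvals.sum(), 2))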

He also shows that these tests have great predictive validity.  High IQ scores at an early age predict educational achievement, professional success, income, and healthy aging.  He also emphasizes the importance of general intelligence for everyday life.  The complexity of everyday life is challenging, and people with low IQs are less successful in managing the challenges of life.  For example, we can compare low and high IQ groups--the low having IQs of 75-90, the high having IQs of 110-125.  People in the low group are 133 times more likely to drop out of high school, 10 times more likely to be a chronic welfare recipient, 7.5 times more likely to be incarcerated, 6.2 times more likely to live in poverty, and 3 times more likely to die in a traffic accident.  People who are not smart have a hard time navigating their way through the complex cognitive challenges of everyday life.  It really is better to be smart.

The twin and adoption studies of intelligence consistently show that genes cannot account for 100% of the variance.  So there are environmental factors involved.  But then the problem is estimating the relative contributions of genes and environment.  Different studies give different proportions, with the most common view being about 50-50.  The explanation for these different outcomes might be the age at which the twins are tested, because the heritability of IQ in identical twins increases with age--from about 30% at age 5 to over 80% starting at age 18.  So for young children environmental factors explain most of the variance, while for older children genes explain most of the variance.  That's why enhanced educational programs for young lower class children do sometimes raise their IQ scores for a few years, but then this improvement disappears as they grow older.
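
For readers curious about where such heritability percentages come from, here is a minimal sketch of the classic Falconer method, which estimates the genetic share of the variance from the gap between identical-twin and fraternal-twin correlations.  The twin correlations below are hypothetical numbers chosen only to reproduce the age pattern just described, not data from any particular study.

```python
# A minimal sketch of Falconer's classic formula for estimating heritability
# from twin studies: identical (MZ) twins share ~100% of their genes and
# fraternal (DZ) twins ~50%, so twice the gap between the two IQ correlations
# estimates the genetic share of the variance.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations chosen to mimic the age pattern described
# above -- heritability rising from roughly 30% in childhood to 80% in adulthood.
print(falconer_heritability(0.66, 0.51))  # 0.30 for young children
print(falconer_heritability(0.85, 0.45))  # 0.80 for adults
```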

That such IQ differences are rooted in our evolutionary history is indicated by the fact that other mammals also show them.  Studies of genetically diverse mice learning various kinds of tasks show a g factor of intelligence.  Mice show a bell curve of individual differences, so that some mice are innately smarter than others, as shown in their diverse learning abilities (Matzel et al., 2003, 2013).  Similarly, chimpanzees show individual variability in heritable intelligence (Hopkins et al., 2014).  We might explain this through the "social brain" hypothesis: for animals that live in complex societies, there is an evolutionary pressure favoring the cognitive ability to navigate through a complex social world.

But the most impressive recent evidence confirming the evolved biological nature of intelligence comes from improvements in the technology of neuroimaging that allow us to see the structural and functional patterns in individual human brains that are correlated with intelligence.

For centuries, scientists have tried to correlate brain size and intelligence, with the thought that bigger brains allow higher intelligence.  Now we know from many MRI studies that there is indeed a correlation between brain size and intelligence test scores, although the correlation is modest--average correlations range from .22 to .40 (McDaniel, 2005).

Another general conclusion from neuroimaging studies is that not all brains work in the same way.  Every individual brain is different, and the patterns differ according to age and sex.  Young brains operate differently from old brains, and male brains operate differently from female brains.  There are differences in the density and organization of the white matter fibers that connect the areas of the brain, and there are differences in the amount of gray matter (the clusters of neurons) in different areas of the brain.

Amazingly, these individual differences are so distinctive that fMRI can identify the unique pattern of connectivity among brain areas in an individual brain as a kind of brain fingerprint.  And these brain fingerprints can predict intelligence (Finn et al., 2015).
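
Finn and her colleagues used a more elaborate pipeline, but the core logic of connectome fingerprinting can be sketched in a few lines: summarize each brain as a vector of region-to-region correlations, then identify a person by finding whose vector from one scanning session best matches a target vector from another session.  Everything in this sketch (the number of people, regions, time points, and the noise level) is simulated for illustration, not their actual method or data.

```python
import numpy as np

# A toy sketch of connectome "fingerprinting" in the spirit of Finn et al.
# (2015), not their actual pipeline: each person's brain is summarized as a
# vector of region-to-region correlations, and a person is identified by
# finding whose vector from session 1 best matches a target from session 2.
rng = np.random.default_rng(0)
n_people, n_regions, n_timepoints = 5, 20, 100

def connectivity_vector(timeseries):
    # Correlate regional time series, then keep the upper triangle of the
    # resulting matrix as that brain's "fingerprint."
    c = np.corrcoef(timeseries)
    return c[np.triu_indices_from(c, k=1)]

# Each person has a stable individual signal plus independent session noise.
base = rng.normal(size=(n_people, n_regions, n_timepoints))
session1 = [connectivity_vector(b + 0.8 * rng.normal(size=b.shape)) for b in base]
session2 = [connectivity_vector(b + 0.8 * rng.normal(size=b.shape)) for b in base]

for i, target in enumerate(session2):
    similarities = [np.corrcoef(target, ref)[0, 1] for ref in session1]
    print(f"session-2 scan of person {i} matched to person {np.argmax(similarities)}")
```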

Neuroimaging has also supported the general conclusion that intelligence is not concentrated in one part of the brain, such as the frontal lobes.  Rather, intelligence is correlated with a distributed network of different areas of the brain.  Haier has concluded that the brain areas connected with intelligence are mostly concentrated in the parietal and frontal areas, which are areas associated with memory, attention, and language.  So he has defended a "Parietal-Frontal Integration Theory" of intelligence (Jung and Haier 2007).

Haier concludes that all of this research supports Herrnstein's original claim in 1971: a liberal society that removes the legal and political obstacles to social mobility will allow the biological differences in intelligence among individuals to be expressed in a class structure of meritocracy based on innate intelligence with a cognitive elite at the top.

Against this conclusion is all of the research that apparently shows that it's not genes but socioeconomic status (SES) that determines social success or failure.  The children of parents with high SES tend to be more successful than the children of parents with low SES.  The flaw in this research, Haier argues, is that it ignores how SES is confounded with intelligence, because SES itself has a strong genetic component (Lubinski, 2009; Trzaskowski et al., 2014).

To explain this point, Haier asks us to consider two alternative trains of thought.  The common train of thought about the importance of SES goes this way:
"Higher income allows upward mobility, especially the ability to move from poor environments to better ones. Better neighborhoods typically include better schools and more resources to foster children's development so that children now have many advantages.  If the children have high intelligence and greater academic and economic success, it could be concluded that higher SES was the key factor driving this chain of events."
An alternative train of thought favored by Haier and Herrnstein goes this way:
"Generally, people with higher intelligence get jobs that require more of the g-factor, and these jobs tend to pay more money.  There are many factors involved, but empirical research shows g is the single strongest predictive factor for obtaining high-paying jobs that require complex thinking.  Higher income allows upward mobility, especially the ability to move from poor environments to better ones.  This often includes better schools and more resources to foster children's development so that children now have many advantages.  If the children have high intelligence and greater academic and economic success, it could be concluded that higher parental intelligence was the key factor driving this chain of events due in large part to the strong genetic influences on intelligence" (192).
This second scenario is strengthened by the fact of assortative mating.  Over the past 60 years, very intelligent women have been able to move into high levels of advanced education and professional training--opportunities denied to women in the past.  As one result of this, many highly intelligent men and women meet in colleges and universities and marry, and then they pass on their high IQ genes to their children.  They also become "power couples" with high double-income wealth.  This is exactly the sorting out of people based on intelligence that Herrnstein foresaw.

The point here is that yes, of course, SES is an important factor in determining social and economic success; but SES includes a genetic component of innate intelligence.
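
This confounding argument can be made concrete with a toy simulation (my illustration, not Haier's analysis): if parental intelligence drives both family SES and, through inheritance, the child's intelligence, then a child's success will correlate with SES even when SES itself does nothing in the model.  All of the coefficients below are made-up illustrative values.

```python
import numpy as np

# A toy simulation of how SES can be confounded with intelligence: parental
# IQ drives both family SES and, via inheritance, child IQ.  SES has no
# direct effect on the child's outcome in this model, yet SES and outcome
# still correlate -- and the correlation largely vanishes once parental IQ
# is controlled for.
rng = np.random.default_rng(1)
n = 100_000

parent_iq = rng.normal(100, 15, n)
ses = 0.5 * (parent_iq - 100) + rng.normal(0, 10, n)             # SES tracks parental IQ
child_iq = 100 + 0.6 * (parent_iq - 100) + rng.normal(0, 10, n)  # inherited component
outcome = 0.8 * (child_iq - 100) + rng.normal(0, 10, n)          # driven by child IQ only

corr = lambda a, b: np.corrcoef(a, b)[0, 1]

def residual(y, x):
    # Remove the linear effect of x from y (simple regression residual).
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print("raw SES-outcome correlation:", round(corr(ses, outcome), 2))   # clearly positive
print("controlling for parental IQ:",
      round(corr(residual(ses, parent_iq), residual(outcome, parent_iq)), 2))  # near zero
```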

This leads Haier to some disturbing conclusions that he identifies as "neuro-poverty" and "neuro-socioeconomic status."  Living in poverty is to some significant degree rooted in the neurobiology of low intelligence that is beyond anyone's control.  Similarly, living in the highest social and economic classes is to some significant degree rooted in the neurobiology of high intelligence that is beyond anyone's control.

There is one optimistic possibility, however.  Even though the neurobiology of intelligence is today "beyond anyone's control," because so far there is no proven scientific treatment for enhancing innate intelligence, Haier does foresee that sometime in the future, scientists might find ways to enhance intelligence through genetic engineering, drug therapy, or neuromicrochips.

But until that happens, we are left with the disturbing conclusion that many people lack the innate intelligence to be very successful in life through no fault of their own.  Some people do better than others in the natural genetic lottery, which is not based on merit.

So does this deny the principle of equal liberty in the Declaration of Independence?  How can people have equal rights to life, liberty, and the pursuit of happiness if in fact their place in the social class system depends to a large extent on their genetically inherited cognitive abilities?

In 1981, I took up this problem in the first conference paper that I wrote on Darwinian political theory.  It was entitled "Charles Darwin and the Declaration of Independence," and it was presented at the national convention of the American Political Science Association in Denver.  In 1984, a revised version of this paper was published as "Darwin, Aristotle, and the Biology of Human Rights" in Social Science Information (vol. 23, no. 3).

I argued that Darwinian biology can recognize that the equality of all human beings as possessing a common human nature is fully consistent with the inequality of human beings due to their different natural endowments.  The reality of biological species is such that members of the same species share a common nature despite their individual differences.  This is the modern biological justification for the Lockean claim that although human beings are naturally unequal in many respects, they are equal in certain rights by virtue of their human propensity to assert their right to pursue their interests in life.

The equality of rights in the Declaration of Independence is an equality of opportunity but not an equality of results.  Herrnstein was wrong to suggest that Jefferson wanted a classless society.  As Jefferson indicated, he hoped for a "natural aristocracy" of "virtue and talents" rather than an "artificial aristocracy" of "wealth and birth."  As Murray indicated in the last chapter of The Bell Curve ("A Place for Everyone"), this Jeffersonian "natural aristocracy" looks a lot like what he and Herrnstein see as a meritocracy.

I elaborated this last point in some of my previous posts on Murray, IQ, and human biodiversity: here, here, here, here, here, here, here, here, here, here, here, here, here, and here.


REFERENCES

Arnhart, Larry. 1984. "Darwin, Aristotle, and the Biology of Human Rights." Social Science Information 23: 493-521.

Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., Papademetris, X., & Constable, R. T.  2015. "Functional Connectome Fingerprinting: Identifying Individuals Using Patterns of Brain Connectivity." Nature Neuroscience 18: 1664-1671.

Haier, Richard J. 2017. The Neuroscience of Intelligence. New York: Cambridge University Press.

Herrnstein, Richard J. 1971. "I.Q." The Atlantic, September.

Herrnstein, Richard J. 1973. I.Q. in the Meritocracy. Boston: Little, Brown, and Company.

Hopkins, W. D., Russell, J. L., & Schaeffer, J. 2014. "Chimpanzee Intelligence Is Heritable." Current Biology 24: 1649-1652.

Jensen, Arthur R. 1969. "How Much Can We Boost IQ and Scholastic Achievement?" Harvard Educational Review 39: 1-123.

Jung, R. E., & Haier, R. J. 2007. "The Parietal-Frontal Integration Theory (P-FIT) of Intelligence: Converging Neuroimaging Evidence." Behavioral and Brain Sciences 30: 135-54.

Lubinski, D. 2009. "Cognitive Epidemiology: With Emphasis on Untangling Cognitive Ability and Socioeconomic Status." Intelligence 37: 625-33.

Matzel, L. D., Han, Y. R., Grossman, H., Karnik, M. S., Patel, D., Scott, N., Specht, S. M., & Gandhi, C. C. 2003. "Individual Differences in the Expression of a 'General' Learning Ability in Mice." Journal of Neuroscience 23: 6423-6433.

Matzel, L. D., Sauce, B., & Wass, C. 2013. "The Architecture of Intelligence: Converging Evidence from Studies of Humans and Animals." Current Directions in Psychological Science 22: 342-348.

McDaniel, M. A. 2005. "Big-Brained People Are Smarter: A Meta-Analysis of the Relationship Between In Vivo Brain Volume and Intelligence." Intelligence 33: 337-346.

Trzaskowski, M., Harlaar, N., Arden, R., Krapohl, E., Rimfeld, K., McMillan, A., Dale, P. S., & Plomin, R. 2014. "Genetic Influence on Family Socioeconomic Status and Children's Intelligence."  Intelligence 42: 83-88.

Sunday, May 13, 2018

Does Watching TV Make Nietzsche's Last Man Smarter?

In my last post, I commented on Ronald Beiner's Dangerous Minds: Nietzsche, Heidegger, and the Return of the Far Right.  I challenged him to present some empirical evidence supporting Nietzsche's claim that liberalism throws everyone into the degraded and spiritless life of the "last man."  In one of his comments on the post, Beiner responded with a question: "Have you watched American TV recently?" 

Of course, this is the standard response of intellectuals who insist that the cultural degradation of bourgeois liberalism is clear in popular culture--particularly, American TV.  But where's the empirical evidence to support this assertion?  We have had experience with over 70 years of regular network television broadcasting.  If Nietzsche's "last man" critique of liberalism is correct, then we could predict that there has been a steady decline in the cognitive complexity of TV programming over these years--from dumb to dumber.  This is a testable prediction.

Since I was born in the United States in 1949, just when families were beginning to purchase television sets for the first time, I grew up during the "golden age" of network television broadcasting, so I can remember "The Honeymooners," "I Love Lucy," and "The Lone Ranger."  Channel surfing today, I occasionally jump to some reruns of these original shows on Nick At Nite.  But I don't watch them for long, because they're so boring!  If I do watch them for a while, it's only to laugh at them for how dumb they were.  Haven't we all had the same experience?  Doesn't this suggest that we have become accustomed to more recent television programming that is more entertaining for us than the first shows, because the new shows are more cognitively challenging?  If so, then either we are becoming smarter, or TV is making us smarter, or both.  And if that is so, then our culture in our liberal society is becoming smarter, which contradicts the Nietzschean last man prediction of a dumbing down culture.

This subjective impressionistic evidence can be confirmed by some objective quantifiable evidence that TV shows have been increasing in their cognitive complexity.  In 2005, Steven Johnson published his book Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter.  An excerpt from the book was published as an article in The New York Times with the title "Watching TV Makes You Smarter."  He argued that contrary to the common assumption that mass popular culture is always declining to lower standards, culture is actually becoming more cognitively demanding, as illustrated by TV programming.  He pointed out that programming on TV is increasing in its demands on our mental capacities, as indicated by increasing complexity in three elements: multiple threading, flashing arrows, and social networks.

Multiple threading refers to the multiplicity of narrative threads in a TV show.  In the 1950s, a typical episode of "Dragnet" had only one story line--crime scene, investigation, cracking of the case--with only two or three main characters.  In the 1980s, an episode of "Hill Street Blues" would have as many as 10 story lines interweaving with many primary characters; and each episode would pick up a few threads from previous episodes and leave some threads open at the end.  In contrast to "Dragnet," viewers had to mentally sort out a complex narrative structure with a complex collection of characters and a complex subject matter.  Later, shows like "The Sopranos" and "ER" became even more complex--with more simultaneous threads, more characters, and more complex subjects.

The flashing arrow is Johnson's term for what he calls narrative hand-holding.  The movie "Student Bodies" was a parody of slasher movies like "Halloween" and "Friday the 13th."  In one scene, the teenage baby sitter hears a noise, opens the door of the house, sees nothing, and then goes back into the house as the door shuts behind her.  The camera swoops in on the doorknob, and we see the door is unlocked: there's a flashing arrow on the screen and the words "Unlocked!"  That's a parody of what popular stories often do: a script inserts someone to tell the viewer some important information for the plot.  Today, popular TV shows don't rely as much on such flashing arrows, thus leaving viewers to figure out what's going on for themselves, which appeals to the mind's pleasure in solving puzzles.

The third element of growing complexity is in social networks.  Much of storytelling is about exploring the complexity of social life.  How are these characters related to one another?  What is motivating them?  Are they deceived about one another?  What are their underlying strategies?  Modern TV shows increasingly force viewers to probe ever deeper social complexity to figure out what is going on.

Moreover, Johnson suggests, the greater cognitive complexity of TV shows today makes them more profitable, because now people can watch TV shows multiple times through reruns and see nuances that were not clear in the first viewing.  There are even fan sites on the Internet where fans can comment on the shows.  Think about "Game of Thrones," for example.  Or the Socratic comedy of "The Simpsons."  (Thousands of students at the University of California at Berkeley have been introduced to the history of philosophy through a course on "The Simpsons and Philosophy.")



We might explain this through an evolutionary science of liberalism.  First, we have evolved to be storytelling animals (as Jonathan Gottschall has said), because storytelling is an evolved adaptation of the human mind for mentally simulating the complex problems of social life and imagining how best to navigate through that social complexity.  Popular culture like TV is largely entertaining storytelling that appeals to that evolved adaptation.  And in the ever-growing social complexity of a liberal pluralist society, embracing millions of individuals cooperating and competing with one another in spontaneous orders without any central planning, the storytelling becomes ever more cognitively challenging.

Another part of this is the amazing increase in average intelligence (as measured by IQ) in liberal societies over the past hundred years, which is called the Flynn Effect (for James Flynn, who has written about it).  Apparently, liberal societies have brought increasing levels of education, and particularly the cognitive challenges of scientific education, which really has brought "Enlightenment," as people in liberal social orders have become smarter.  And this increasing intelligence has brought with it increasing moral intelligence, which Steven Pinker has identified as the "moral Flynn effect."  (I have written about this in posts here and here.)

"Have you watched American TV recently?"  Well, yes, we might answer, and we can see evidence there that the "last man" of American liberal culture is far smarter than Nietzsche predicted.

Wednesday, May 09, 2018

Nietzsche, Nazism, and the Alt-Right: Ronald Beiner's "Dangerous Minds"

            Adolf Hitler Staring at a Bust of Friedrich Nietzsche at the Nietzsche Archives

"Hail Trump!  Hail Our People!  Hail Victory!"

This was the famous exclamation of Richard Spencer at a gathering of the Alt-Right in Washington, DC, shortly after Donald Trump's electoral victory in 2016.  Paul Gottfried coined the term "Alternative Right" in 2008.  But Spencer claims to have originated the abbreviation "Alt-Right" in that year, and he has been one of the best known leaders of the Alt-Right movement, which is devoted to establishing what Spencer calls the "white ethnostate" for North America and Europe.  Spencer also claims to have originated the term "ethnostate," although this seems to be a variation on what Frank Salter has called the "ethnic state."  (At the bottom of this post, I've provided links to other posts on this and related topics.)

According to Spencer, this all started with Friedrich Nietzsche.  Spencer has said: "I was red-pilled by Nietzsche."  "Red-pilled" refers to a famous scene in the movie The Matrix, in which Keanu Reeves's character swallows a red pill that allows him to see that he and all of his fellow humans have been plugged into a delusional dream, and that he must free them from their dream.  So, to "red-pill" is Alt-Right slang for the moment when people see that all the ideals of liberal democracy--equality, liberty, pluralism, and peace--are delusional, and that the true reality of life is the racial and ethnic struggle for cultural dominance.  Spencer swallowed his red pill when he began reading Nietzsche as a college student at the University of Virginia; later, as a graduate student at the University of Chicago, he began to study Leo Strauss, whom he saw as sympathetic to fascist thinking.

What he learned was that liberal egalitarian modernity was an expression of Christian slave morality as opposed to the master morality of Greek-Roman civilization, and that this slave morality was responsible for the decadence of Western culture as promoting the dehumanizing degradation of what Nietzsche called "the last man"--the man who lives an ignoble life of safe and comfortable pleasures with no aspiration for heroic achievement.  To overcome this decadence of liberalism, we need a new nobility of elite Supermen who can create an illiberal culture of pagan master morality in which the strong rule over the weak.

Now we have Ronald Beiner's new book--Dangerous Minds: Nietzsche, Heidegger, and the Return of the Far Right (University of Pennsylvania Press, 2018)--in which he traces the intellectual history that runs from Nietzsche to Martin Heidegger to fascism and Nazism and, finally, to the recent resurgence of fascism in the Alt-Right and other illiberal authoritarian movements across Europe and Russia.

Beiner's argument for the intellectual links between Nietzsche, Heidegger, Nazism, and the newly resurgent fascist authoritarianism is persuasive.  An even more carefully detailed history of Nietzsche's place in the Third Reich is given by Steven Aschheim in Chapter 8 of his book The Nietzsche Legacy in Germany, 1890-1990 (University of California Press, 1992).  As Aschheim argues, it's an empirical fact of cultural history that Nietzsche was ideologically appropriated by Hitler and the Nazis as part of the official culture of the Third Reich.  But accepting this appropriation of Nietzsche by the Nazis as a fact of cultural history does not settle the question of whether their interpretation of Nietzsche was accurate or not.

Beiner rightly argues that even if the Nazi interpretation was mistaken, it was a misinterpretation that was promoted by Nietzsche himself in his most reckless writing.  Nietzsche said that the highest human being is the Dionysian artist-philosopher or Superman who exercises his will to power by tyrannically legislating new values for all of humanity.  He said that "slavery is . . . both in the cruder and in the more subtle sense, the indispensable means of spiritual discipline and breeding" (BGE, 188).  He said that the new nobility would require "merciless annihilation of everything that was degenerating and parasitical" (Ecce Homo, "Birth of Tragedy," 4).  He declared that European democracy must ultimately transform itself into "a new and sublime development of slavery," in which the "herd animal" is enslaved to the "leader animal" (Will to Power, 954, 956).  Thus, the democratization of Europe is "an involuntary arrangement for the breeding of tyrants--taking that word in every sense, including the most spiritual" (BGE, 242).  This tyrannical rule of the artist-philosophers will require "conscious breeding experiments," "terrible means of compulsion," and even "the annihilation of millions of failures."  This is necessary for the "domination of the earth" by a "new, tremendous aristocracy," in which "the will of philosophical men of power and artist-tyrants will be made to endure for millennia," and the "breeding of a new caste to rule over Europe" will unify it into "one will" (BGE, 208, 251; Will to Power, 764, 954, 960, 964).  "What is good? Everything that heightens the feeling of power in man, the will to power, power itself. What is bad? Everything that is born of weakness. . . . The weak and the failures shall perish: first principle of our love of man.  And they shall be given every possible assistance.  What is more harmful than any vice? Active pity for all the failures and all the weak: Christianity" (The Antichrist, 2).  There is plenty here to inspire Hitler, the Nazis, and the Alt-Right.

Moreover, as Beiner indicates, Nietzsche foresaw that this would happen.  In a letter (to Malwida von Meysenbug, June 1884), he wrote: "The sort of unqualified and utterly unsuitable people who may one day come to invoke my authority is a thought that fills me with dread.  Yet that is the torment of every great teacher of mankind: he knows that, given the circumstances and the accidents, he can become a disaster as well as a blessing to mankind."  Beiner asks: "Well, if Nietzsche was so terrified about this, why didn't he simply exercise more responsibility or more prudence about how he wrote?  There's no good answer to this question" (63).

But here I see the first of two weak points in Beiner's argument.  He speaks of the "insane recklessness" and "extreme lunacy" of Nietzsche's writing that attracts people like Hitler and Spencer (63).  But while Beiner sees this in the early and late writings of Nietzsche (28-34), he passes over the middle writings--particularly, Human, All Too Human and Dawn--in silence, and so he does not notice that the writings of Nietzsche's middle period do not show the "insane recklessness" and "extreme lunacy" of his other writings.

In fact, some of the Nazi writers who read Nietzsche carefully noticed that his middle writings contradicted Nazi ideology.  For example, Heinrich Hartle's Nietzsche and National Socialism (Nietzsche und der Nationalsozialismus) was an official Nazi book published by the central Nazi publishing house in 1937 and 1944.  Hartle argued that the National Socialists would have to separate those ideas in Nietzsche's books that supported Nazi ideology from those that did not; and in particular, the Nazis would have to reject the teachings in Nietzsche's middle writings that supported liberal democratic individualism rather than statist collectivist authoritarianism.

In many posts over the years, I have argued that Nietzsche's middle writings show a Darwinian aristocratic liberalism that contradicts the Dionysian aristocratic radicalism of his early and late writings, and it's only the latter that inspires the Nazis and the fascists.

In his middle writings, Nietzsche respects the freedom provided by liberal democracy, which includes freedom for "free spirits"--philosophers and scientists--to live their lives of intellectual inquiry without persecution, while also allowing the great multitude of people to live their lives free from tyrannical exploitation.  In contrast to his early and late writings, Nietzsche here sees liberal modernity as ennobling rather than degrading or dehumanizing.

Beiner ignores this, which leads him into what I see as the second weak point in his argument--he accepts the claim of Nietzsche in his later writings that liberalism necessarily leads to the decadence of the "last man," and he refuses to even consider the empirical evidence against this claim.

Beiner insists that life in liberal modernity is "profoundly dehumanizing" and "a profound contraction of the human spirit" (10).  In any liberal society, "the whole experience of life spirals down into unbearable shallowness and meaninglessness" (11).  He says that as a college student in Canada, he first read Nietzsche as an "antidote to growing up amid the banality and conformism of suburban life in North America" (16).  The reason for all this degradation of life in Canada and all other liberal societies is that liberalism's "excessive openness and the exploding of fixed horizons" creates "horizonlessness" (25, 28).  Consequently, there is "a form of life where privileged horizons, horizons that sustain a definite understanding of the point of existence, have ceased to exist" (35).  This brings "spiritlessness" and "a total extermination and uprooting of culture," so that culture as such becomes impossible (30, 34, 144).  No one in a liberal society can escape this "spiritual void," because "everyone suffers from this horizonlessness" (38, 132).  So life becomes meaningless for everyone who lives in a liberal society.  It is therefore easy to understand the popular appeal of Nietzschean fascists and Nazis who offer what Heidegger called "spiritual renewal."

So while Beiner thinks that Nietzsche's "solutions" for the problem of liberal decadence are "all nonsense or lunacy," he also thinks that Nietzsche's "cultural diagnosis" of the problem is "not nonsense" (24).  This leads to Beiner's final conclusion at the end of his book: "I don't rule out the possibility that Nietzsche and Heidegger successfully articulate aspects of spiritual or cultural vacuity in the liberal egalitarian dispensation that defines modernity.  But what they offer by way of new dispensations to supplant spiritless modernity is far worse" (134). 

Well, if the illiberal alternatives to liberalism are far worse, then doesn't that mean that liberalism is better?  But how can liberalism be better if it only promotes "spiritual or cultural vacuity"?

And what should we say about poor Professor Beiner at the University of Toronto whose whole life has been meaningless because of the "spiritlessness" of Canadian liberal society?  Not only has he been forced to live the life of the "last man," he has learned from reading Nietzsche that he is a "last man" living a despicably degraded life, and so he must suffer from self-loathing.  Or does his capacity for self-loathing show that he is not a "last man"?

I don't believe that Beiner and all of his fellow Canadians have lived meaningless or spiritless lives, because I don't believe that a liberal society like Canada forces everyone to become Nietzsche's "last man."  I see no way to settle this disagreement between me and Beiner except by looking at the factual evidence of how people live in liberal societies to see if they live well or badly.  Amazingly, however, Beiner never offers any factual evidence to support his claim that everyone in a liberal society suffers from a meaningless or spiritless life.  In this way, his rhetorical strategy is exactly the same as that of other recent critics of liberalism--like Steven Smith and Patrick Deneen--who cite the claims of anti-liberal cultural critics that liberal bourgeois modernity is dehumanizing, and then assume the truth of those claims without considering any of the relevant empirical evidence.

In some of my previous posts, I have surveyed the empirical evidence that the Liberal Enlightenment has promoted human progress by fostering the good character--the moral and intellectual virtues or what Deirdre McCloskey calls the "bourgeois virtues"--that promote human happiness or flourishing.  For example, one can see the correlation between the Human Freedom Index and the World Happiness Report, which shows that liberal regimes tend to be high in both freedom and happiness, and the illiberal regimes tend to be low in both freedom and happiness.

In Enlightenment Now, Steven Pinker argues for the stunning success of the Liberal Enlightenment as shown by massive factual evidence (conveyed in 73 charts of statistical data) of human progress over the past 200 years: because of liberalism today more human beings are living longer, healthier, wealthier, freer, safer, more stimulating, and generally happier lives than human beings have ever lived in any time in history.

Beiner is silent about all of this evidence for the flourishing of human life in liberal societies. 

He is also silent about the evidence of social history that denies his claim that in liberal societies, it is impossible for people to live in moral communities with "horizons that sustain a definite understanding of the point of human existence" (35).  Consider, for example, the social history of voluntary religious communities like the Amish, the Hasidic Jews, or the Mormons, who have become some of the fastest growing religious groups in the United States.  Beiner suggests that the only way to have "viable horizons" is through "legislating authoritative horizons whose only authority is the act of legislation itself" (57).  But groups like the Amish illustrate how in liberal societies moral and religious horizons arise in families and voluntary associations (churches, schools, clubs, friendships, and so on) without being coercively legislated.  In liberal societies, people can always exercise "The Benedict Option" (as Rod Dreher calls it)--they can form self-governing communities of people dedicated to some shared vision of moral or religious excellence.  The importance of such character formation for liberal political theorists is evident, for example, in texts such as John Locke's Some Thoughts Concerning Education and Adam Smith's Theory of Moral Sentiments.

The evidence of social history also shows that liberal societies provide the intellectual freedom of thought that cultivates the life of the mind in philosophy and science.  Beiner seems to deny this by agreeing with Heidegger that part of the shallowness of life in a modern liberal society is that people are distracted from plumbing the depths of the mysterious question of Being--why is there something rather than nothing?  Thus, people do not engage in the "heroic thinking" that constitutes true philosophy (70-91).  But, in fact, Heidegger's question of Being--of why or how something comes from nothing--has become a fundamental question for modern philosophy and science--particularly in response to the scientific theory of the Big Bang as the origin of everything from nothing.

Beiner is also silent about the evidence of political history that shows the spirited heroism of liberal societies.  He speaks about the emotional appeal of Hitler's heroism (130-31), but he says nothing about the liberal heroism of Winston Churchill in leading Great Britain to resist and finally defeat Hitler.

The history of liberalism is to a large extent the history of spirited resistance to tyranny and courage in war.  The Declaration of Independence was a declaration of Lockean liberalism that was also a declaration of war.  The American Civil War under the heroic leadership of Abraham Lincoln became a test of whether people in a liberal society were courageous enough to fight and die for the emancipation of slaves and a "new birth of freedom."

In Great Britain, John Stuart Mill saw Lincoln's leadership in the war as a vindication of the moral heroism of people in a free society.  In "The Contest in America" (1862), Mill wrote:
"I cannot join with those who cry Peace, peace.  I cannot wish that this war should not have been engaged in by the North . . . . War, in a good cause, is not the greatest evil which a nation can suffer.  War is an ugly thing, but not the ugliest of things: the decayed and degraded state of moral and patriotic feeling which thinks nothing worth a war, is worse.  When a people are used as mere human instruments for firing cannon or thrusting bayonets, in the service and for the selfish purposes of a master, such war degrades a people.  A war to protect other human beings against tyrannical injustice; a war to give victory to their own ideas of right and good, and which is their own war, carried on for an honest purpose by their free choice--is often the means of their regeneration.  A man who has nothing which he is willing to fight for, nothing which he cares more about than he does about his personal safety, is a miserable creature, who has no chance of being free, unless made and kept so by the exertions of better men than himself.  As long as justice and injustice have not terminated their ever renewing fight for ascendancy in the affairs of mankind, human beings must be willing, when need is, to do battle for the one against the other."
This doesn't sound like the degraded and meaningless life of the "last man."


Here are links to some of my posts that elaborate some of my points here:

Nietzsche's middle period:  here, here, here, here, here, and here

Nazi philosophers:  here and here

The Alt-Right ethnic state:  here and here

Leo Strauss and Nazism:  here and here

Patrick Deneen and the Amish:  here and here

Rod Dreher and the Benedict Option:  here

Steven Smith:  here and here

Deirdre McCloskey and the bourgeois virtues:  here, here, and here

Steven Pinker and liberal progress:  here and here

The Human Freedom Index:  here

Empirical Human Progress through the Liberal Enlightenment:  here

Heidegger's question of something from nothing:  here and here

Thursday, May 03, 2018

Nietzsche's Critique of Jordan Peterson's Nietzschean Religion






The first three YouTube videos here are short. The fourth is a compilation of videos that is longer--about 55 minutes.

Oh, I know, many of you think I have already written too much about Jordan Peterson. So you can skip this post. And I promise this will be my last one on Peterson.

These videos show Peterson presenting his interpretation of Friedrich Nietzsche's proclamation of the death of God as creating a problem for morality--particularly, the Western morality of natural rights or human rights as founded on the sacred dignity of all individuals.  Peterson claims that this shows that morality is impossible without a grounding in a transcendent religious metaphysics.  Even those who think they are scientific atheists--like Richard Dawkins and Sam Harris, for example--are actually acting out their implicit practical belief in Christian metaphysics, because they embrace a Christian morality of natural individual rights.  This shows that "we're running on the fumes of Christianity in the West."  Or to use another metaphor, we're living inside the corpse of a whale (the dead God), and there has been plenty for us to eat; but we don't realize that soon there will be nothing left for us to eat.

People like Dawkins and Harris think that their morality can be based on pure rationality--the rational science of the Enlightenment.  But in fact, as Dostoevsky shows in Crime and Punishment, there's nothing irrational about choosing to become a psychopathic murderer (like Raskolnikov): It's perfectly rational to choose to take whatever you want whenever you want it from others without any concern for their welfare, as long as you can escape punishment.  Dostoevsky is showing us that this is what we would do if we truly were atheists.

This is the Ring of Gyges argument in Plato's Republic: if I could make myself invisible, so that I could steal, cheat, and murder for my pleasure, without ever getting caught, why not?  As Dostoevsky declared: If God is dead, then everything is permitted.  This explains why Peterson thinks he has to appeal to a Nietzschean/Jungian religion--an atheistic religion--to solve the problem of morality collapsing into nihilism if there is no religiously grounded morality.

As indicated in his lectures and in Maps of Meaning (6-7), Peterson's two favorite passages from Nietzsche are from Twilight of the Idols (ix.5) and The Gay Science (sec. 125).  In the first passage, Nietzsche ridicules George Eliot and the English generally for thinking they can deny the existence of the Christian God while keeping Christian morality, without realizing that Christian morality must be a command of God--its origin is transcendental--and therefore the death of God must bring the death of Christian morality. 

The second passage is Nietzsche's first statement of his famous declaration that "God is dead."  What is notable about this passage, as Peterson points out, is how Nietzsche laments this as a disaster for humanity: "What did we do when we unchained this earth from its sun? Whither is it moving now? Whither are we moving now? Away from all suns? Are we not plunging continuously? Backward, sideward, forward, in all directions?  Is there any up or down left?  Are we not straying as through an infinite nothing?  Do we not feel the breath of empty space? Has it not become colder?  Is not night and more night coming on all the while?  Must not lanterns be lit in the morning?  Do we not hear anything yet of the noise of the grave-diggers who are burying God?  Do we not smell anything yet of God's decomposition? Gods too decompose."  Peterson seizes on Nietzsche's suggestion that the death of God means that there is no longer any up or down--without God to command what is right or wrong, there are no standards of higher or lower for us.

But then even as Peterson agrees with Nietzsche's claim that human morality depends on transcendent standards--a moral cosmology--Peterson also says that his whole position is embedded in a Darwinian evolutionary science that would seem to view human morality as founded on empirical standards--a moral anthropology.  This contradiction in Peterson's reasoning actually coincides with a contradiction within Nietzsche's own writings: the early and late writings show a longing for transcendence and religious redemption, while the middle writings show a Darwinian science that explains morality as rooted in evolved human nature.  (I have written about this in a series of posts from January to April, 2013.)

That Peterson agrees with the middle Nietzsche in seeing morality as grounded on a Darwinian moral anthropology is clear in 12 Rules for Life.  Agreeing with my principle that the good is the desirable, Peterson writes:
"Think about it like this.  Start from the observation that we indeed desire things--even that we need them. That's human nature.  We share the experience of hunger, loneliness, thirst, sexual desire, aggression, fear, and pain.  Such things are elements of Being--primordial axiomatic elements of Being. But we must sort and organize these primordial desires, because  the world is a complex and obstinately real place.  We can't just get the one particular thing we especially just want now, along with everything else we usually want, because our desires can produce conflict with our other desires, as well as with other people, and with the world.  Thus, we must become conscious of our desires, and articulate them, and prioritize them, and arrange them into hierarchies.  That makes them sophisticated. That makes them work with each other, and with the desires of other people, and with the world.  It is in that manner that our desires elevate themselves.  It is in that manner that they organize themselves into values and become moral. Our values, our morality--they are indicators of our sophistication" (101-102).
Here Peterson seems to agree with me (and with Philippa Foot) that morality is a system of hypothetical imperatives that depend on human interests and desires.  Morality is informed desire.  The good is the desirable, and reason judges how best to satisfy the desires over a whole life, which often requires settling conflicts between desires by judging how one desire fits with others in some deliberate conception of a whole life well lived.  Hypothetical moral imperatives can be understood as following a given/if/then structure: Given what we know about our evolved human nature and our individual circumstances,  if we want to live a flourishing human life, then we must organize the satisfaction of our desires into a coherent plan of life, which requires the moral and intellectual virtues.

But then immediately after the passage just quoted, Peterson says that we need to move to a deeper level of morality to see how the "ultimate values" depend on religion, which is what Plato meant by the transcendent "Idea of the Good."  So all of our morality depends on our religious beliefs.  And if someone objects, "But I'm an atheist," Peterson will answer:
"No, you're not (and if you want to understand this, you could read Dostoevsky's Crime and Punishment, perhaps the greatest novel ever written, in which the main character, Raskolnikov, decides to take his atheism with true seriousness, commits what he has rationalized as a benevolent murder, and pays the price). You're simply not an atheist in your actions, and it is your actions that most accurately reflect your deepest beliefs--those that are implicit, embedded in your being, underneath your conscious apprehensions and articulable attitudes and surface-level self-knowledge. You can only find out what you actually believe (rather than what you think you believe) by watching how you act.  You simply don't know what you believe, before that. You are too complex to understand yourself" (103).
Although Peterson offers this as some profound insight, it's really quite ridiculous.  The only reason we don't commit murder is because we believe that God commands us not to murder.  So if we believed that God was dead, we would commit murder.  Therefore, if we don't commit murder, our actions show that we are not atheists.  But then, eventually, as modern atheism becomes such a deeply felt belief that it becomes expressed in our actions--once we have consumed God's corpse, and there's nothing more to eat--we should expect that we will all become murderers.

If this were true, we would expect to see empirical historical evidence that religious belief is correlated with a low homicide rate, and declining religious belief is correlated with a high homicide rate.  But as we've seen in many previous posts, there is a lot of evidence for declining violence over the past centuries, with some of the steepest declines in the less religious countries. 

In fact, even Peterson cites Steven Pinker's Better Angels of Our Nature as supporting this conclusion: "The probability that a modern person, in a functional democratic country, will now kill or be killed is infinitesimally low compared to what it was in previous societies (and still is, in the unorganized and anarchic parts of the world)" (58).  Oddly, Peterson does not notice how this contradicts his prediction that the modern death of God must necessarily turn us all into murderous Raskolnikovs.

It's surprising to me that in all the commentary on Peterson that I have read, no one has pointed out this fundamental contradiction in his arguments.

There is another aspect of this fundamental contradiction.  On the one hand, Peterson insists that the domain of science as the study of objective facts is completely separated from the domain of religion as mythic storytelling about subjective values (34-35, 188).  On the other hand, he accepts the "social brain" hypothesis of evolutionary psychology as explaining the evolution of religious belief as expressing the "hyperactive agency detection device" in our brains (38-40).  Peterson doesn't recognize that this evolutionary theory of religious belief was first proposed by Darwin in The Descent of Man and by Nietzsche in Human, All Too Human (as I have indicated in posts here and here).  Nor does he recognize the contradiction in asserting that science both can and cannot study religious belief.

I have elaborated my criticisms of the claim that the death of the Christian God means the death of morality here, here, and here.