Thursday, April 29, 2021

The Lockean Evolution of Nonviolent Revolution


Myanmar Protesters Using the Three-Finger Salute from "The Hunger Games."  The Military Dictatorship Has Banned This Salute.


Over the past three months, we have seen in Burma (Myanmar) what could have happened in the United States if Donald Trump had overturned the presidential election of 2020 and declared his right to rule under martial law.  In November, Burma's National League for Democracy, led by Daw Aung San Suu Kyi, won the elections for Parliament by a landslide.  On February 1, only a few hours before the new Parliament was to meet, the Burmese military (the Tatmadaw)--under its commander in chief, General Min Aung Hlaing--announced that because the elections had supposedly been fraudulent, the Parliament would be abolished, and the military would take power over the country under a one-year declaration of emergency.  Aung San Suu Kyi and other members of Parliament were arrested.  Almost immediately, protesters marched across the country, launching a nonviolent resistance campaign to overthrow the military dictatorship and restore parliamentary democracy.

Many Americans have feared that something like this could have happened in the United States.  Before the election of 2020, Trump indicated that he might refuse to accept the outcome of the election if he lost.  After he lost the election, he insisted that the election had been rigged against him, suggesting that he might refuse to leave office.  Some of Trump's supporters--including Michael Flynn, speaking after he was pardoned by Trump--said that Trump should declare martial law and suspend the Constitution so that he could rule as a military dictator.  On January 6, Trump urged his supporters to march to the Capitol to stop Congress from certifying the election results--the election he claimed had been stolen from him--which led to insurrectionary violence in the Capitol Building.

Before the election, thousands of Americans had agreed to an elaborate plan of action for a nonviolent resistance movement to stop any such military coup by Trump.  This plan--"Hold the Line: A Guide to Defending Democracy"--was based on the theory and practice of nonviolent civil resistance to overthrow dictatorships, particularly as studied by scholars like Gene Sharp and Erica Chenoweth (Marantz 2020; Merriman et al. 2020).  

The fundamental idea behind this plan is Lockean--that governmental power depends on popular consent and that the people have the natural right to overthrow an oppressive government by withdrawing their consent.  This revolutionary overthrow of government can be either violent or nonviolent.  The proponents of nonviolent resistance argue that the nonviolent methods of revolution are more likely to succeed and cause less suffering than are the violent methods, and there is some historical evidence to confirm this.  

The history of the past 120 years shows a Darwinian cultural evolution of nonviolent revolution--an evolutionary process of variation and selective retention in which methods of nonviolence that have succeeded in one country are transmitted by imitation to other countries.  In this evolutionary diffusion of ideas, scholars like Sharp and Chenoweth have provided instruction through the collection and analysis of the historical data for comparing violent and nonviolent revolutions.  (I have written some posts on this process of Darwinian cultural group selection.)


SERBIA 2000: "HE IS FINISHED!"

One historical example of how nonviolent resistance can overthrow a dictator who is trying to overturn an election is Serbia in 2000.  Slobodan Milosevic had been President of Serbia since 1989, and his authoritarian power was based on electoral fraud, suppression of freedom of the press, police brutality against his opponents, and political assassinations.  University students had formed an organization named Otpor (Serbian for "resistance") in 1998, devoted to overthrowing Milosevic through nonviolent resistance.  Their resistance movement was guided by the writings of Gene Sharp--particularly his book From Dictatorship to Democracy (2012)--which presented 198 methods of nonviolent revolutionary struggle for overturning a dictatorship and establishing a democracy in its place.  Sharp's book was translated into Serbian, and thousands of copies were distributed to political activists around Serbia (Arrow 2020, 176-92).  This book was an abbreviated version of his magnum opus--The Politics of Nonviolent Action (1973)--which originated as his dissertation at Oxford University.  (I have written some previous posts on nonviolent resistance here and here.)

Sharp taught them that every dictatorship--like every government--depends on "pillars of support," the key institutions that sustain the power of the regime.  These might include the police, the military, governmental bureaucrats, businessmen, and religious organizations.  If enough people in those groups can be persuaded to defect--to withdraw their support--the dictatorship will collapse.  This can be explained by the idea of "minimal winning coalitions": no ruler can rule alone, because even an absolute dictator needs the loyal support of a coalition of powerful people, and when those people withdraw their consent, the ruler becomes powerless.  (I have written about this in another post.)

Sharp thought that this dependence of rulers on popular consent was recognized by Niccolo Machiavelli when he said that the prince "who has the public as a whole for his enemy can never make himself secure; and the greater his cruelty, the weaker does his regime become" (The Discourses, I.16.5).  Sharp has been called "the Machiavelli of nonviolence," because of his Machiavellian toughness in explaining the strategy and tactics of nonviolent resistance as a form of revolutionary warfare.

In his public lectures, Sharp would often employ aggressive language.  He would begin by saying: "My name is Gene Sharp, and we're here today to discuss how to seize political power and deny it to others.  I say nonviolent struggle is armed struggle!  And we have to take back that term from those advocates of violence who try to justify with pretty words that kind of combat.  With this kind of struggle, one fights with psychological weapons, social weapons, economic weapons, and political weapons.  This is ultimately more powerful against oppression, injustice, and tyranny than is violence."

A Clenched Fist Was the Symbol of Otpor

The Otpor leadership identified the police and the army as the most important pillars of support for Milosevic, and they began infiltrating these organizations through personal contacts.  Otpor members wrote letters to every police station in Serbia, warning the police to think about their lives after the end of the Milosevic regime.  The letters also said: "You are our friend.  Your kids are in Otpor.  If you hurt them, your fellow citizens will shun you."

Otpor organized massive protest marches across Serbia.  To keep the marches nonviolent, they trained people as marshals who would direct the protests and isolate any individuals who became violent.  They anticipated that Milosevic's security forces would plant provocateurs in their marches to initiate violence, so that Milosevic could brand Otpor as a terrorist organization, the protesters would lose public approval, and the police would rally behind the regime.

Over the summer of 2000, many members of Otpor were arrested by the police, beaten, and imprisoned.

A presidential election was scheduled for September 24, 2000.  The opposition parties agreed to run a single candidate--Vojislav Kostunica--to maximize their chance of defeating Milosevic.

Anticipating that Milosevic would try to steal the election, the opposition organized an elaborate exit-polling system for counting the votes.  When they reported that Kostunica had won the election, Milosevic's party announced that no candidate had won a majority, and so there would have to be a run-off election.  The opposition declared that this was a lie and that the people should prepare for a general strike and mass protests against Milosevic.  Soon the entire country was shut down by the strike.

On the morning of October 5th, convoys of people from around Serbia began to move towards Belgrade, the capital, where they were to meet at 3 p.m. outside the parliament building.  The plan was to take control of the building.  Milosevic deployed his police around the edges of Belgrade to enforce roadblocks.  But when the convoys reached the roadblocks, the police stood aside, and the convoys moved through.  When the convoys reached the square outside the parliament building, the people were singing and chanting, "Gotov Je! Gotov Je!" ("He's finished!").

The police and the army were ordered to clear the square and to fire into the crowd of protesters.  But that order was not carried out.  Later, it was reported that those in command refused the order to fire because they knew members of their own families were in the crowd.

But at this point, by the evening of October 5th, the disciplined nonviolent movement began to lose control of the crowd.  The parliament building was on fire, and word got out that some people were breaking into Milosevic's Socialist Party Building, planning to burn it down.  When the leaders at the Otpor office heard about this, they gathered volunteers and rushed to the building to drag the provocateurs out.  They surrounded the building to protect it and to prevent any outbreak of violence.

Late that night of October 5, Milosevic announced he was resigning.  All across Serbia, and even elsewhere around the world, crowds of people celebrated.

One of the leaders of Otpor--Srdja Popovic--has said that one of the most important lessons taught to them by Gene Sharp was the need to maintain nonviolence: if the protesters had turned to violence, this would have provided an excuse for the military to launch a coup to restore order, and a new dictatorship would have taken control.

The other important lesson from Sharp illustrated by this successful nonviolent movement in Serbia is that mass protest marches are not enough to succeed.  Anyone watching the CNN coverage of the crowds around the Parliament Building could easily conclude that such a movement needs only to attract mobs of people to flood the streets, and then the government collapses overnight.

That's not true, because Otpor had been training their people and organizing their movement for years, and mass public protests were only one of many methods they used.  That's the point of Sharp's 198 methods of nonviolence--that to succeed, nonviolence must flexibly employ many techniques over a long time.  

The failure of the Tiananmen Square protests in China in 1989 illustrates this point.  The Chinese students in this movement had not read Sharp's books, nor had they studied the history of nonviolent resistance.  They were not organized or trained.  They relied on only one method--occupying the Square in Beijing.  It was then easy for the Chinese military to end the movement by sweeping into the Square and firing into the crowds.

By 2000 in Serbia, the cultural evolution of nonviolence had spread knowledge of nonviolent methods around the world; and the leaders of Otpor had deliberately studied Sharp's books to learn how best to organize their movement.  In subsequent years, those who had led Otpor could spread the knowledge of their practical experience and transmit Sharp's books to other countries engaged in nonviolent struggles to overthrow authoritarian regimes--such as the "Rose Revolution" in Georgia (a former Soviet state) and the "Orange Revolution" in Ukraine.



"LETTER TO A SLEEPLESS SYRIAN"

In 2011, the revolutionary uprisings of what came to be called the "Arab Spring" began in January and February.  Mass public protests in Tunisia forced President Zine El Abidine Ben Ali to flee the country after 23 years in power.  Similar protests in Egypt forced President Hosni Mubarak to resign after 30 years in power.  In Libya, a revolution began that would overthrow Muammar Gaddafi, who had ruled for 42 years.

It seemed that Syria might follow a similar revolutionary path in March, when a civil uprising began to challenge the rule of Bashar al-Assad, who had governed as a dictatorial president for 11 years, preceded by 30 years of rule by his father Hafez al-Assad.  This uprising began as a nonviolent resistance movement led by some Syrians who were following Gene Sharp's teaching.  But by the middle of the summer, the opponents of the regime had turned to armed warfare, and from that point to the present, Syria has been in a civil war.  Today, ten years after the beginning of the civil war, large parts of Syria are under the control of rebel factions or foreign powers, but al-Assad seems to have largely won the war.

Authoritarian dictators can point to this example of Syria as showing the foolishness of nonviolent resistance to tyranny, because it soon becomes violent, which leads to a destructive civil war that is worse than any tyrannical government.  But, actually, Sharp himself warned his Syrian followers to remain nonviolent, and he accurately predicted that if the Syrian protesters became violent, this would provoke a civil war that they were unlikely to win (Arrow 2020, 267-286).

Some of the Syrians who were leading the revolution in Syria early in 2011--such as Ausama Monajed and Mohammed Alaa Ghanem--had studied Sharp's books, and they developed a strategy for nonviolent resistance derived from his teaching.  They launched a campaign to attack Assad's "pillars of support" by selecting slogans that challenged Assad's legitimacy to rule, by persuading those in the military to defect, and by calling on Syrians to boycott the products of those in the business community supporting Assad.  All of this was directed to demanding Assad's resignation and the establishment of a truly democratic regime.  In April, in an interview in London, Ausama Monajed said: "Gene Sharp's tactics and theories are being practiced on the streets of Syria as we speak now."

By late April, it appeared that the nonviolent movement against Assad was winning.  Assad ordered his soldiers to arrest, torture, and kill the protesters.  But the atrocities committed against civilians were provoking popular disgust with the regime.  Soldiers were starting to defect because they did not want to shoot into crowds that might include members of their own families.

Far from stopping the atrocities, however, Assad increased them.  He had learned from his father the lesson that it was better to be feared than loved--that princes can secure their power against opponents through "cruelty well-used."  The most infamous example from his father, Hafez al-Assad, was the 1982 massacre in the town of Hama.  For six years, the Muslim Brotherhood had led a violent Islamist insurgency against al-Assad, which included attempts to assassinate him.  Then, in response to an uprising in Hama, al-Assad ordered over 12,000 troops to surround the town and destroy it.  The fighting continued for over three weeks.  The town was virtually leveled.  The rebels were killed.  And thousands of civilians were massacred, perhaps as many as 20,000 to 40,000.  It has been described as the deadliest attack by an Arab government against its own people.  This ended the Islamist rebellion against al-Assad and secured his rule and the rule of his son for another 30 years.

Imitating his father's cruelty, Bashar al-Assad ordered his soldiers to shoot the protesters, and those who refused the orders or who defected were shot by snipers placed behind the regular soldiers.  He recruited violent criminals and others to move through the country without uniforms to murder and rape the protesters.  They posed as Alawite Muslims killing Sunni Muslims, or as Sunnis killing Alawites.  He wanted to sow sectarian hatred.  He also wanted to provoke the nonviolent protesters into becoming armed insurgents, because then he could identify them as terrorists, and his soldiers would be motivated to fight against them rather than defecting.

He succeeded.  On July 29, 2011, seven officers who had left the Syrian Armed Forces to join the resistance announced the formation of the Free Syrian Army.  Later, other rebel armies and militias were formed to fight against Assad.  These rebel soldiers said that they would protect the nonviolent protesters from the violence of Assad's troops.

When Gene Sharp was told about this, he warned that this was a mistake that would destroy the nonviolent movement and favor the triumph of Assad, because he would always have military superiority.  He was asked about this in an email from Mohammed Ghanem, who signed off as "The Sleepless Syrian."  Sharp wrote back in a letter called "Letter to a Sleepless Syrian" (Arrow 2020, 283-84):
"The present unease of the troops in obeying orders is clear.  Otherwise he regime would have no need to order immediate killing of disobeying soldiers.  That is strong evidence that the reliability of the army is very shaky and possibly on the verge of collapse.  That means that wise resistance actions are crucial to take the army away from the regime. . . . The regime clearly is desperate.  They may intend for their brutalities to enrage protesters so much that they resort to violence.  The protesters should not be tricked into that.  Significant protest violence would guarantee the army's loyalty and the defeat of the revolution."

Sharp warned against the offer of armed protection for the protesters from the army defectors:

"This supposed major help can dramatically change the conflict so that the regime's still overwhelming superior military capacity will shape the course of the continuing conflict.  Then, without major external military intervention, the dictatorship is likely to triumph.  Inevitable major casualties are likely to vastly exceed even exceptionally high casualties in a nonviolent struggle conflict.  The nonviolent resisters are likely to become irrelevant to shaping the future of their country."

So here, as he had often done, Sharp claimed that nonviolent revolutionary movements are better than violent revolutionary movements in at least two respects: nonviolent movements have a lower death rate and a higher success rate than violent movements.  In his writings, he cited many historical cases to support this claim (Sharp 1973).  But he never did a systematic quantitative analysis of the historical data that would conclusively demonstrate this.


COMPARING VIOLENT AND NONVIOLENT REVOLUTIONS

In recent years, Erica Chenoweth and her colleagues have collected the historical data for 627 nonviolent and violent revolutionary campaigns from 1900 to 2019, and they have done a quantitative analysis of that data to compare the outcomes (success or failure) of these movements.  They have found that Sharp is right about the superiority of nonviolent movements over violent movements (Chenoweth 2021).  (I have written about Chenoweth's research here and here.)

If we define the success of a revolutionary campaign as the overthrow of a government or territorial independence achieved because of the campaign, then over 50% of the nonviolent revolutions from 1900 to 2019 have succeeded, while only about 26% of the violent revolutions have succeeded.  So nonviolent revolutions do not always succeed.  They succeed only about half of the time--about as often as they fail.  Or, if you see the glass as half empty, they fail about as often as they succeed.  Still, that makes nonviolent revolutions almost twice as successful as violent revolutions.

Nonviolent resistance is risky, in that governments can respond to a nonviolent resistance movement by killing unarmed civilians.  But nonviolent resistance is less risky than violent resistance.  If "mass killings" are defined as state violence in which at least one thousand unarmed civilians are killed, about 23% of nonviolent revolutions have suffered mass killings.  By comparison, about 70% of violent revolutions have had mass killings, in which governments kill civilians suspected of supporting the armed insurgents.
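For readers who like to see the arithmetic, here is a minimal sketch in Python of the comparison just described, using only the rounded aggregate rates cited above from Chenoweth (2021); the numbers are stand-ins taken from her published summary figures, not her raw campaign data.

```python
# A minimal sketch of the comparison described above, using only the rounded
# aggregate rates cited from Chenoweth (2021)--not the raw campaign data.

rates = {
    # campaign type: summary rates for campaigns from 1900 to 2019
    "nonviolent": {"success": 0.50, "mass_killing": 0.23},
    "violent":    {"success": 0.26, "mass_killing": 0.70},
}

# Nonviolent campaigns succeed almost twice as often as violent ones.
success_ratio = rates["nonviolent"]["success"] / rates["violent"]["success"]
print(f"Relative success rate: {success_ratio:.2f}x")       # about 1.92x

# And they are roughly one-third as likely to suffer mass killings.
risk_ratio = rates["nonviolent"]["mass_killing"] / rates["violent"]["mass_killing"]
print(f"Relative mass-killing risk: {risk_ratio:.2f}x")     # about 0.33x
```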

We might say that nonviolent resistance is more risky than obedience.  But even obedience is risky if it means obeying a brutal government.  Chenoweth has found that the strongest predictor of nonviolent revolution is a government's violations of human rights: when a government is arbitrarily imprisoning, torturing, and killing people, then people can decide that they have no choice but to rebel.  This was Locke's point: people will suffer a bad government while its evils are sufferable, but they are inclined to revolt when governmental cruelty becomes unbearable.

We might wonder, however, whether it is reasonable to compare violent and nonviolent movements given the fact that nonviolent resistance is often combined with some violence.  And we might suspect that the success of nonviolent resistance movements often depends on the movement becoming violent or at least threatening violence.

How we answer this question will depend on how we define violence.  Chenoweth defines it as "an action or practice that physically harms or threatens to physically harm another person" (Chenoweth 2021, 145).  Even when a resistance movement is predominantly nonviolent, it can have "violent flanks"--some people in the movement who use violence along with the mostly nonviolent campaign.  These violent flanks can be either armed (people taking up arms such as guns) or unarmed  (people fighting in the streets or throwing rocks and other projectiles or destroying property).

In her database of nonviolent revolutionary campaigns, Chenoweth found that over 60% had no armed factions.  But she also found that over 80% of the nonviolent revolutionary movements had some unarmed violence such as street fighting and destroying property.

She noted, however, that authoritarian rulers threatened by nonviolent resistance want to provoke nonviolent protesters into becoming violent, because this allows the rulers to justify the violent repression of the resisters as the proper punishment of violent criminals.  When nonviolent protests become violent, this reduces public support for the protest movement, while at the same time the police and the military are less likely to defect.  Her historical data confirm this: while 65% of the nonviolent revolutions that had no fringe violence were successful, only 35% of the nonviolent revolutions that had some fringe violence were successful.

So why then do nonviolent resistance movements so often turn to violence?  The answer from Sharp and Chenoweth is that when nonviolent resisters rely exclusively on concentrated street protests, they expose themselves to violent attacks from the government; and this will often provoke many of them to defend themselves with violence.  Their mistake is in failing to see that there are many methods of nonviolent resistance that are less risky than visible street protests--such as stay-at-home strikes, boycotts, and other forms of noncooperation--that are effective in weakening a dictatorship.

So how does this apply to the current situation in Burma?  Can the nonviolent resistance against the military dictatorship remain nonviolent?  Or will the movement be provoked by the government into violence?  I will turn to those questions in my next post.


REFERENCES

Arrow, Ruaridh. 2020. Gene Sharp: How to Start a Revolution. London: Big Indy Books.

Chenoweth, Erica. 2021. Civil Resistance: What Everyone Needs to Know. New York: Oxford University Press.

Marantz, Andrew. 2020. "How to Stop a Power Grab." The New Yorker. November 16.

Merriman, Hardy, Ankur Asthana, Marium Navid, and Kifah Shah. 2020. Hold the Line: A Guide to Defending Democracy.  Washington, DC: International Center on Nonviolent Conflict.

Sharp, Gene. 1973. The Politics of Nonviolent Action. Boston: Porter Sargent Publishers.

Sharp, Gene. 2012. From Dictatorship to Democracy: A Conceptual Framework for Liberation. New York: The New Press.

Saturday, April 24, 2021

Conservative Confusion About Constitutional Originalism

Until recently, the conservatives who supported Donald Trump agreed on one main point in their conservative case for Trump: Trump's appointment of federal judges recommended by the Federalist Society would promote the conservative legal movement by putting on the bench only judges who adhere to the original meaning of the Constitution and the clear textual meaning of the laws.  That 28% of those actively serving on the federal bench today--including three of the nine Supreme Court Justices--were appointed by Trump must therefore be a great success for conservative jurisprudence.  But now Trumpian conservatives cannot agree about this.

The problem began in June of last year when Neil Gorsuch wrote the majority opinion (six justices) in Bostock v. Clayton County, holding that the Civil Rights Act of 1964 forbids employment discrimination against homosexuals and transgender people, because Title VII of that Act makes it unlawful "to discriminate against any individual . . . because of . . . sex."  Conservatives were shocked that Trump's first Supreme Court appointee, who filled the vacancy created by Antonin Scalia's death, would write such an opinion.  They were particularly shocked that Gorsuch justified his opinion as grounded in a strict Scalian textualist interpretation of the law: according to the literal meaning of the text of the law, discriminating against individuals because they are homosexual or transgender discriminates against them because of their sex, which therefore violates Title VII.  

Some religious conservatives (like Senator Josh Hawley) declared that this showed that their bargain with Trump and the conservative legal movement had failed, because a textualist originalism was supporting liberal policies.  (I have written previously about Gorsuch's opinion in Bostock.)


BEYOND ORIGINALISM?

Some conservatives (like Adrian Vermeule) say that this shows the failure of originalist jurisprudence to support conservative morality.  Instead of originalism, Vermeule argues, conservatives should embrace a "common good constitutionalism" or "substantive moral constitutionalism."  "This approach should take as its starting point substantive moral principles that conduce to the common good," Vermeule explains, "principles that officials (including, but by no means limited to, judges) should read into the majestic generalities and ambiguities of the written Constitution."  This would mean a recognition of the fact that those who have social authority--including judges--must "legislate morality."

Vermeule acknowledges that in making this argument he is following the lead of Ronald Dworkin, who was well known for claiming that we need "moral readings of the Constitution"--that interpreting the Constitution is an exercise in moral philosophizing.  But while common-good constitutionalism is "methodologically Dworkinian," Vermeule observes, it "advocates a very different set of substantive moral commitments and priorities from Dworkin's, which were of a conventionally left-liberal bent."  But some conservatives will object: if we are going to allow conservative judges to develop conservative "moral readings of the Constitution," how can conservatives deny the freedom of liberal judges to read the Constitution as teaching a liberal morality?


A BETTER ORIGINALISM?

This objection has led some conservatives to contend that while we should agree with Dworkin in finding moral philosophy in the Constitution, we should insist that this be the original moral philosophy of the Founders who framed and ratified the Constitution, with the assumption that this original moral philosophy of the Founders was a conservative moral philosophy.  This seems to be the position of Hadley Arkes, Josh Hammer, Matthew Peterson, and Garrett Snedeker in their essay "A Better Originalism," who argue for a "common good originalism."

They recognize that textualist originalism can correctly interpret the literal meaning of legal texts, as Gorsuch did in Bostock.  But such interpretation, they argue, mistakenly ignores the moral meaning of the laws, because it assumes a narrow positivist jurisprudence, according to which the law is whatever the lawmaker says it is, and its validity has nothing to do with its morality.

Against such positivist originalism, they affirm four principles of an originalism based on moral and natural law:

1. We hold that moral truth is inseparable from legal interpretation.

2. We hold that the Anglo-American legal order is inherently oriented toward human flourishing, justice, and the common good.

3. We reject literalist legal interpretation and hold to the common sense jurisprudence of the founders.

4. We believe in a jurisprudence that is, in the truest and most profound sense of the term, conservative, in preserving the moral ground of a classic jurisprudence.

They believe that these principles are implicitly affirmed in the Declaration of Independence and in the Preamble of the Constitution.  The Declaration appealed to "the Laws of Nature and of Nature's God."  That provided a moral ground for a shared commitment to natural rights secured by government, so that the "just powers" of government are directed to the "Safety and Happiness" of the people, for whom law is "wholesome and necessary for the public good."

The Preamble enumerated the substantive ends of the Constitution: "a more perfect Union," "Justice," "domestic Tranquility," "the common defence," "the general Welfare," and "the Blessings of Liberty."  Consequently, a constitutional originalist must interpret the clauses of the Constitution as directed to these substantive moral ends as "the telos, or purpose, for which those clauses have been formed."

They recognize the most common conservative objection to their position, mentioned above, that affirming the moral meaning of the Constitution will allow liberal judges to find liberal morality in the Constitution.  They respond: "If our friends claim that judges on the Left will take this as a new license for moral reasoning untethered, our answer is: why do we suppose that we cannot tell the difference between arguments that are plausible or specious?  The answer to the Left is to show why their reasoning is false; it is not to end all moral reasoning and disarm conservative judges."  

So, like Vermeule's, their constitutional jurisprudence is "methodologically Dworkinian" in looking for a moral reading of the Constitution; but they are confident that correct moral reasoning will persuasively support a conservative morality for the Constitution and refute the left-liberal morality that Dworkin claimed to find there.

Arkes and his colleagues seem to be speaking for the Claremont Institute in their critique of positivist originalism.  After all, their arguments sound a lot like those made by Harry Jaffa against Rehnquist, Bork, and Scalia.  

But that creates a problem for the Claremont Institute's support for Trump.  If Trump's judicial appointees belong to the positivist jurisprudence championed by the Progressives (such as Oliver Wendell Holmes), doesn't this mean that Trump has promoted the overturning of the Founders' Constitution?  Doesn't this contradict the Claremont Institute's claim that it supports the natural rights/natural law tradition of the Founding against the positivism of the Progressives?  (That Scalia's positivist jurisprudence was rooted in the progressive legal tradition of Holmes has been well argued by George Anastaplo.)


DEFENDING POSITIVIST ORIGINALISM

The best conservative critique of this conservative moral originalism has been written by John Grove for the Liberty Fund's Law & Liberty website.  Josh Hammer--one of the coauthors of "A Better Originalism"--has identified Grove's essay as "the most thorough response" to their argument.

Grove makes two general claims.  First, the text of the Constitution enumerates and organizes the powers of government, but it "neither answers nor empowers judges to answer the great moral questions of public life for us."  Second, Burkean conservatives should know that it would be dangerous to allow judges to act as moral arbiters, because to allow powerful people to impose on society their personal conceptions of morality would lead to tyranny.

While Arkes and his colleagues assert that "the Constitution's preamble enumerates substantive ends," Grove quotes the statement in the Constitutional Convention's Committee of Detail report that the Preamble was "not for the purpose of designating the ends of government and human polities."  Moreover, Grove notes that there was no recorded debate on the language of the Preamble--"a silence that would be shocking if any of the delegates thought the passage infused a great moral telos into the document."  Hammer admits: "Grove scores some clever points in his favor, especially with respect to the Constitutional Convention's Committee on Style, which drafted the Preamble without leaving behind any notes or recorded debate."

Nevertheless, Hammer insists that "Grove outright misses the mark," when he says that the Constitution does not "authorize the importation by judges of moral content . . . on which the Constitution is not indeterminate but utterly silent."  Hammer says that this begs the question at issue, which is "whether the Constitution is actually silent on the matter if it is properly understood and construed."  But isn't this a remarkably weak argument--to say that if the Constitution "is properly understood and construed," we can read moral language into the Constitution, although the Constitution never actually uses moral language?


THE ORIGINAL MEANING OF THE CONSTITUTIONAL AMENDMENTS

It is surprising that while both sides in this debate refer to the Constitution as ratified in 1789, both sides are silent about the Constitution of 1791 (with the ratification of the first ten amendments) and the Constitution of 1870 (with the ratification of the 13th, 14th, and 15th amendments).  And it's in those amendments that one sees the moral language of rights, including those rights "retained by the people" prior to government.

The Constitution of 1789 uses the word "right" only once, in enumerating the power of Congress to secure "for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries" (Art. I, section 8, clause 8).  In the amended Constitution, the word "right" appears 12 times in the amendments.

To show that Alexander Hamilton recognized the "limited purpose" of the Constitution that did not require moral principles, Grove quotes Hamilton's observation that the long bills of rights often included in the state constitutions "would sound much better in a treatise of ethics than in a constitution of government."  This remark is from Federalist number 84, in which Hamilton defended the Constitution of 1789 against the objection that it had no bill of rights.  But Grove says nothing about the decision of James Madison and other founders in 1789 to amend the Constitution by adding a bill of rights.  Did this make the Constitution sound like a "treatise of ethics"?

In his speech to the House of Representatives explaining his proposed amendments, on June 8, 1789,  James Madison argued that although the Constitution had been ratified without a bill of rights, it would be good to satisfy that great body of the people that wanted such a bill of rights in the Constitution.  "We ought not to disregard their inclination, but, on principles of amity and moderation, conform to their wishes, and expressly declare the great rights of mankind secured under this constitution."  He indicated that the proposed list of rights included both "natural rights" such as freedom of speech and "positive rights" such as trial by jury.

"If they are incorporated into the constitution," he explained, "independent tribunals of justice will consider themselves in a peculiar manner the guardians of those rights; they will be an impenetrable bulwark against every assumption of power in the legislative or executive; they will be naturally led to resist every encroachment upon rights expressly stipulated for in the constitution by the declaration of rights."

He thought that such a bill of rights would guide judges in protecting the rights of the minority from being violated by a powerful majority:

"In a Government modified like this of the United States, the great danger lies rather in the abuse of the community than in the legislative body.  The prescriptions in favor of liberty ought to be levelled against that quarter where the greatest danger lies, namely, that which possesses the highest prerogative of power.  But this is not found in either the executive or legislative departments of Government, but in the body of the people, operating by the majority against the minority."

That this list of rights included natural rights that existed prior to government was indicated by the 9th Amendment: "The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people."

That this idea of rights "retained by the people" pointed to natural rights in the state of nature before the establishment of government was clear in Roger Sherman's draft of the Bill of Rights: "The people have certain natural rights which are retained by them when they enter into Society."

So what would Grove say about the 9th Amendment?  Since he quotes approvingly from Robert Bork, I wonder whether he would agree with what Bork said in his 1987 testimony before the Senate Judiciary Committee that was considering his nomination to the Supreme Court by President Reagan:

"I do not think you can use the Ninth Amendment unless you know something of what it means.  For example, if you had an amendment that says 'Congress shall make no' and then there is an inkblot, and you cannot read the rest of it, and that is the only copy you have, I do not think the court can make up what might be under the inkblot."

Would Grove agree that the 9th Amendment is a meaningless inkblot that should be ignored? 

I also wonder how Grove would read Section 1 of the 14th Amendment: "No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws."

This language was originally drafted by Congressman John Bingham from Ohio.  In his final speech in support of the 14th Amendment in May of 1866, he said that its purpose was to provide a check against the "many instances of State injustice and oppression" and "to protect by national law the privileges or immunities of all the citizens of the Republic and the inborn rights of every person within its jurisdiction whenever the same shall be abridged or denied by the unconstitutional acts of any State."

This is the moral language of Lockean classical liberalism--affirming natural human rights that must be secured by any just government.  That neither Arkes and his colleagues nor Grove acknowledge this language in the Constitution's amendments suggests their agreement in their rejection of this Lockean constitutional morality.


GORSUCH'S ORIGINALISM IN BOSTOCK

Returning to Gorsuch's opinion in the Bostock case, which prompted this whole debate over conservative originalist jurisprudence, I have to wonder whether a Scalian originalist like Grove would agree with Gorsuch's reasoning.  Gorsuch's credentials as a Scalian textualist are unimpeachable.  And in this case, Gorsuch relies on the strict meaning of the legal text in Title VII of the Civil Rights Act of 1964 to conclude that discrimination against homosexuals or transgender people is illegal because it is discrimination against them because of their sex.  Would Grove say that this is good positivist originalism, because it adheres to the literal meaning of the law as written by the lawmakers, without invoking Gorsuch's personal beliefs in any "moral truth" or "natural law" behind or above the legal text?

Perhaps Grove would agree with Justice Alito that Gorsuch's textualism is fraudulent.  In his dissenting opinion in the Bostock decision, Alito warns that Gorsuch's opinion is legislation disguised as a judicial opinion interpreting a statute, and therefore it is not really anything like Scalia's textualism.

"The Court attempts to pass off its decision as the inevitable product of the textualist school of statutory interpretation championed by our late colleague Justice Scalia, but no one should be fooled.  The Court's opinion is like a pirate ship.  It sails under a textualist flag, but what it actually represents is a theory of statutory interpretation that Justice Scalia excoriated--the theory that courts should 'update' old statutes so that they better reflect the current values of society." 

Gorsuch emphatically asserts that he is following Scalia's textualist approach in grounding his legal interpretation in the text of the law and nothing else.  "When the express terms of a statute give us one answer and extratextual considerations suggest another, it's no contest.  Only the written word is the law, and all persons are entitled to its benefit."

Remarkably, Justice Kavanaugh concedes that "as a very literal matter," Justice Gorsuch's interpretation of the legal text is correct! But Kavanaugh argues that as a sound principle of judicial interpretation, the "ordinary meaning" of a legal phrase is to be preferred over the "literal" meaning.  So while Gorsuch is right that the literal meaning of discrimination because of sex includes discrimination against homosexuals and transgender people, the ordinary meaning accepted by reasonable people--legislators, judges, and citizens--is that prohibiting discrimination because of sex means equal treatment for men and women, which does not require prohibiting discrimination based on sexual orientation or gender identity.  Kavanaugh says that Scalia would not have accepted Gorsuch's literal interpretation of "because of sex."  After all, Scalia himself once said that "the good textualist is not a literalist."  

And yet Gorsuch relies heavily on a crucial interpretation of Title VII by Scalia in his opinion for a unanimous Court in Oncale v. Sundowner Offshore Services (1998).  Joseph Oncale alleged that he was forced to quit his job working on an oil platform in the Gulf of Mexico because he had been sexually harassed by other men in the crew.  Oncale said that he would not have been harassed by these men if he had been a woman, and therefore he had been discriminated against because of his sex, which was prohibited by the language of Title VII.

When the Congress wrote and approved Title VII in 1964, probably no one anticipated that the language of the text would prohibit the sexual harassment of men by other men in the workplace.  This was not part of Congress's original intent.  But still, Scalia concluded, if the literal interpretation of Title VII protected men from such harassment, then this was part of the original meaning of the statute, even if that meaning was not understood in 1964.

The same literal reading of "because of sex" applied here, Gorsuch observes, also applies to employment discrimination based on the homosexuality or transgender status of employees.

This is what Walter Olson at the Cato Institute calls a "surprise plain meaning" reading of the law.  Sometimes a strict textualist reading of the law can turn up a meaning that surprises the jurisprudential textualists, a meaning that might even contradict the conservative policy preferences of the textualists.

Isn't that good for textualism, because it refutes the claim of textualism's critics that textualists use the supposed objectivity of textualism to read their own conservative ideology into the text of the law?  If textualism reveals in an unbiased way what the law really says, then one should expect that what is found in the law will sometimes surprise or even disappoint the conservative textualists.

That is the case with Gorsuch's reading of Title VII as expanding LGBT rights, against the desire of religious conservatives that those rights should be narrowed.  Similarly, as I have argued in a previous post, one can make a good textualist argument for concluding that the original meaning of the 14th Amendment supports same-sex marriage, which would provide a textualist justification for Justice Kennedy's opinion in Obergefell.  That argument has been well made by William Eskridge and Steven Calabresi in their amici curiae brief in the Obergefell case.  Eskridge and Andrew Koppelman made a similar textualist argument supporting Gorsuch's opinion in Bostock in their amici curiae brief in that case.

Now a positivist originalist like Grove might object that a decision like Bostock allows the Supreme Court to usurp the lawmaking powers of Congress.  But we should keep in mind that the Congress has the power to overturn or revise the Supreme Court's interpretation of the law in Bostock by legislating a congressional interpretation of Title VII.  Congress has all the constitutional powers to be the supreme branch of the national government, even if it often refuses to exercise those powers.

Wednesday, April 21, 2021

The Debate Over the Claremont Institute's Supporting Trump

I have written a series of posts on the history of the Claremont Institute's support for Donald Trump.  The most recent issue of the Claremont Review of Books has three essays repudiating Trump, which prompted my question: What took them so long?

A few weeks ago, the Liberty Fund's Law & Liberty website published an essay by Shep Melnick reviewing Charles Kesler's new book--Crisis of the Two Constitutions: The Rise, Decline, and Recovery of American Greatness.  Melnick argued that Kesler's book shows the "political worldview" that motivated the apocalyptic rhetoric of Michael Anton's essay in 2016 urging conservatives to vote for Trump: "2016 is the Flight 93 election: charge the cockpit or you die. . . . a Hillary Clinton presidency is Russian Roulette with a semi-auto.  With Trump, at least you can spin the cylinder and take your chances."  Writing for the American Greatness website, Glenn Ellmers has defended Kesler against Melnick's criticisms.

According to Kesler, America is deeply divided by the "crisis of the two constitutions," in which conservative Republicans want to recover the original Constitution of the American Founders that is based on Nature, while liberal Democrats reject that Constitution in favor of the "living Constitution" of the American Progressives that is based on History.  Melnick says that this Manichean view of America divided into two warring camps is both mistaken in its account of American history and dangerous in its incendiary rhetoric.  Ellmers says that this dark view of American moral and political polarization is correct, because the American Left really does hate the America defined by the principles of the Founding, and they will destroy it if they are not stopped by political leaders like Trump, who want to recover American greatness.

Melnick's critique of Kesler rests on three claims about Kesler's position:  "he ignores serious flaws in the American regime, exaggerates the influence of progressive historicism, and constructs a narrative that encourages anti-constitutional extremism."  It would be instructive if Ellmers were to refute these three claims; but as far as I can see, he has not.

Melnick's first claim is that by identifying the America of the Founders as "nothing less than 'the best regime of Western civilization,'" Kesler deflects attention from those defects in the original American regime that needed to be corrected.  So, for example, the Civil War amendments--the 13th, 14th, and 15th amendments--were necessary to affirm those original principles of natural equality and liberty that had been denied by the practices of slavery and racial discrimination.

Presumably, Kesler would say that this shows that America has always been the "best regime" in its natural principles, although its historical practices have only gradually and imperfectly approximated those principles.  But if that's so, doesn't that mean that rather than choosing between Nature and History as foundational, we can see History as a progressive realization of Nature's standards?

That points to Melnick's second claim--that Kesler's simple story of the Founders' appeal to Nature being displaced by the Progressives' appeal to History is false, because American political thought has always shown appeals to both Nature and History.  Kesler relies on James Ceaser's argument in his Nature and History in American Political Development (2006).  But as Melnick points out, Ceaser recognizes that American political thinkers have grounded their arguments in both Nature and History since colonial times.  The Puritans invoked the Sacred History of the Bible.  By the middle of the 18th century, many Americans invoked the Customary History of England, which showed how the colonists' rights as Englishmen derived from the ancestral history of the British Constitution.  Or they invoked a Whig Customary History that reached beyond the British Constitution to the older constitution that was thought to have governed the Saxons before the Norman invasion.

Beginning with the Declaration of Independence and the Founding, Ceaser observes, there was a turn toward Nature--the natural rights of man.  But then in the period of the second party system (1830-1850), there was a shift towards a synthesis of History and Nature, in which "the Whig Party relied on the Historical School while the Democratic Party adopted a version of Philosophy of History" (Ceaser, p. 33).

According to Ceaser, Abraham Lincoln went through three phases of thinking in which he appealed to both nature and history.  "He began as a Whig, affirming a general Whig synthesis of nature and tradition while speaking infrequently of the concept of nature.  In his second phase, as a Republican in the 1850s, the center of his thinking flowed from restating the concept of nature.  In a third and final phase, evident only in the last year of his life, he added the dimension of a new version of Sacred History that became the framework for his Second Inaugural Address" (p. 50).

Kesler's simple story of how the Founders' Nature was replaced by the Progressives' History ignores Ceaser's complex story of the synthesis of Nature and History in American political development.

Finally, Melnick's third claim is that Kesler's simple story promotes Anton's dangerous extremism in which it's a battle between good and evil, and since the evil ones are going to crash the plane and kill us all, our only choice is to put Donald Trump in the cockpit, even though he is unfit to fly the plane, because at least he is the enemy of our enemy.

Remarkably, in their recent articles for the Claremont Review, Kesler along with Andrew Busch and William Voegeli now admit that Trump was a bad president because he is a bad man, and thus they disagree with Anton who continues to defend Trump.  In his response to Melnick, Ellmers says nothing about this.

Thursday, April 08, 2021

Albert Somit (1919-2020): The Striving for a Biopolitical Science

 

Albert Somit


Albert Somit died on August 2, 2020, at the age of 100.  He was one of the most distinguished political scientists of his generation.  His distinctive contribution to political science was in applying biology to politics and thus promoting "politics and the life sciences" or "biopolitics," which has shaped my thinking since the late 1970s.  There is a good obituary that tells the story of his life.

Shortly before his death, he finished writing his last professional paper (co-authored with Steven Peterson) for presentation at the 2020 virtual convention of the American Political Science Association, with the title "Political Science and Biology: Then, Now, and Next."  This is a valuable survey of the history of biopolitics within the discipline of political science, which helps me to think about this as a striving for what I call "biopolitical science."  (You can request a copy of this paper from Steven Peterson at sap12@psu.edu)

Although they do not develop the idea, Somit and Peterson point to biopolitical science as I understand it when they say that by the 1960s political science was not yet a "real science," because it lacked an "overarching theory" or "paradigm" or "Big Idea," and that a biological science of politics could provide the paradigmatic intellectual framework that was needed.  This could succeed where other proposed frameworks--such as power, systems theory, structural-functionalism, and rational choice theory--had failed (4).  If this were to happen, "biopolitics would no longer be seen as a special, narrow part of political science--but a part of every field in the discipline, integrated into the larger world of the study of politics" (26).

Political science could become a true science, I have argued, by becoming a biopolitical science of political animals.  This science would be both Aristotelian and Darwinian.  It would be Aristotelian in fulfilling Aristotle's original understanding of political science as the biological study of the political life of human beings and other political animals.  It would be Darwinian in employing Charles Darwin's evolutionary theory as well as modern advances in Darwinian biology to explain political behavior as shaped by genetic evolution, cultural evolution, and individual judgment.  To illustrate how such a biopolitical science could account for the course of political history, I have shown how such a science could deepen our understanding of one of the crucial turns in American political history--Abraham Lincoln's Emancipation Proclamation of January 1, 1863.  I have written a post about this.  But actually this entire blog is devoted to developing this biopolitical science.

The idea of a biopolitical science has seemed ridiculous to most political scientists (and to social scientists generally), Somit and Peterson explain, because they agree with Emile Durkheim's dictum that "human social behavior was socially acquired," which means that human social behavior is culturally learned rather than biologically innate.  This is what John Tooby and Leda Cosmides have called the Standard Social Science Model (SSSM)--the belief that human social life is purely cultural or socially learned and thus transcends biology, that it's more nurture than nature (4).  

Somit and Peterson see "biopolitics" as the name for those few political scientists who began in the 1960s and 1970s to challenge the cultural determinism of the SSSM by showing that biological factors do have some influence on human politics and social behavior.  The idea here, Somit and Peterson say, is "that political scientists should give proper weight to the role played by Nature, as well as by Nurture, in shaping our social and political behavior."  "The most powerful factor shaping Homo politicus is our species' genetic legacy as social primates.  That legacy, together with socialization, influences almost every aspect of our social, political, economic, and cultural life" (11).

But notice that they accept the Nature/Nurture dichotomy and the separation of biological science as explaining our "genetic legacy" from cultural studies as explaining our "socialization."  They assume that cultural history and social learning are not part of biology.  Thus they implicitly deny the comprehensiveness of biopolitical science--as I understand it--because they deny that a biological science of politics can explain not only the genetic nature but also the cultural history of political behavior.  They identify the biological science of ethology or animal behavior as an important part of biopolitics, but they fail to recognize how studying the cultural history and biographical history of social animals has become a crucial part of ethology.  So, for example, Jane Goodall's Chimpanzees of Gombe is a study not only of the genetic universals of chimpanzee behavior but also of the cultural history and individual personalities of the chimpanzees at Gombe.  (I have written about the biological study of animal cultures and animal personalities here and here.)

Somit and Peterson rightly give prominence to John Hibbing and his colleagues in their promotion of genopolitics and the psychophysiology of political ideology.  Somit and Peterson recognize a serious weakness in this research: much of it has failed to replicate, which suggests that it is deeply flawed.

But they do not recognize the fundamental problem with this approach to biopolitics: it works with unduly simplified models of genes and neurobiology that cannot capture the emergent complexity of political behavior as the product of many interacting causes and levels of analysis.  I have suggested that a more complex version of biopolitics would have to move through at least six dimensions of political evolution:

     1. genetic evolution
     2. epigenetic evolution
     3. the behavioral evolution of culture
     4. the symbolic evolution of culture
     5. ecological evolution
     6. the individual life history and judgment of political agents

That's what I mean by biopolitical science.

I have written about this here and here.

I should also say that I have been free to devote a good part of my intellectual career to developing this idea of biopolitical science because I was fortunate to be part of the "Politics and Life Sciences" program in the Department of Political Science at Northern Illinois University.  Somit and Peterson recognize this as the only program of its kind--founded in the early 1980s and coming to an end in 2012.  I joined the program at its start in 1983 and stayed there until my retirement in 2012.

There were two unique features of this program at NIU.  First, this was the only Ph.D. program in political science anywhere in which Politics and the Life Sciences was a graduate field of study.  Second, the undergraduate program included some courses that were cross-listed as both political science and biology courses, so that these courses enrolled both biology majors and political science majors, which promoted good interdisciplinary class discussions.  These courses were popular with biology majors who wanted to think about the broad humanistic implications of biology beyond the narrow constraints of their regular biology classes.  I team-taught one of these courses with a biology professor (Neil Blackstone).

As Somit and Peterson indicate, in recent years a Ph.D. specialty in biopolitics has been established at the University of Nebraska-Lincoln.

Monday, April 05, 2021

If Good Brains Support Morality, Do Bad Brains Support Immorality?

On August 1, 1966, twenty-five-year-old Charles Whitman went to the top of the University of Texas Tower carrying guns and ammunition.  Having earned a sharpshooter's badge in the Marines, he was an excellent marksman.  For over 90 minutes, he shot at anyone he saw.  He killed 13 people that day before he was killed by Austin police.  This was the worst school shooting in American history until the shooting at Virginia Tech in 2007.  The night before he went to the Tower, he killed his mother and his wife.

Those who knew Whitman were shocked, because he had always appeared to be a talented young man with good character.  He earned his Eagle Scout Badge when he was only 12 years old, which made him one of the youngest boys to earn that honor in the history of the Boy Scouts.  But then as he neared his 25th birthday, he changed.

Whitman had been going to doctors and psychiatrists with complaints that something was wrong with him mentally--that he felt overly aggressive and had thoughts of killing people.  He also felt tremendous pain in his head.  In the suicide note that he left, he asked that there should be an autopsy of his brain to see what was wrong, which might improve the scientific understanding of the biological causes of mental illness leading to violence.

Texas Governor John Connally appointed a commission of experts to study the causes of Whitman's behavior.  They found evidence of a tumor the size of a pecan pressing on his amygdala.  They said that while it was possible that this had contributed to his violent emotions, there was not enough scientific understanding of how brain lesions like this influence thought and behavior to reach any firm conclusion that this was a contributing cause of his murderous actions.

Scientists have continued to debate this.  Some have seen evidence that his brain tumor was probably a partial cause of his crime.  Others have pointed to many other possible causes from his life experience.  Whitman's father had physically and emotionally abused him and his mother.  His mother was forced to run away from his father and move to Austin to be close to her son.  These and other psychological stressors could have led to a mental break that drove Whitman to his violent, self-destructive behavior.  Far from being the cause of his criminal violence, his brain tumor might have been only a coincidental occurrence.

And yet there are some cases of people with no history of criminal propensities who become criminal shortly after suffering brain lesions, which suggests some causal connection between the crime and the brain disorder. 

If we understood how certain kinds of brain damage might increase the probability of criminal behavior, could that help us in predicting and punishing such behavior?  Does criminality become less blameworthy when it is at least partially caused by neurological disorders?  Or should we say that as long as someone like Whitman fully understands what he is doing and chooses to do it, even as he knows that it is wrong, his blameworthiness is not reduced?

And if we understood what went wrong in Whitman's brain to create his criminal mind, could this also help us understand what must go right in a normal brain to create a moral mind?

It should be easier today than it was in 1966 to answer these questions, because since the 1990s, the technology of brain scanning--particularly MRI--has allowed us for the first time to see images of the structure and functioning of both criminal minds and moral minds in the brain.  We now have many case studies of people who have had damage in particular parts of the brain, who then show criminal behavior sometime after that damage, and the brain scans can identify the areas of the brain that have been damaged.

There are, however, at least four problems in these studies.  The first is the problem of individual differences.  Most people who have damage to the same part of the brain do not become criminals.  So there must be factors other than brain damage that vary for different individuals with different genetic propensities, different neural circuitry, and different life histories that explain why some become criminals, and others do not.  For example, Phineas Gage suffered massive damage to his ventromedial prefrontal cortex, and while people with that kind of brain lesion often become criminals, Gage did not.

The second problem is that the lesions that seem to cause criminality occur in several different parts of the brain.  While lesions in the prefrontal cortex are most commonly associated with criminal behavior, lesions in other parts of the brain are sometimes associated with criminality.  Similarly, while the normal functioning of the prefrontal cortex seems to be required for good moral judgment, the neuroscientific study of morality has identified many other areas of the brain that contribute to moral experience.

The third problem is that the plasticity of the brain allows the brain to reorganize its neural circuitry after damage has occurred, so that healthy parts of the brain can take on some of the functionality that was lost in the damaged part. 

The fourth problem is that it is not clear how the correlation between brain damage and criminality should influence our legal standards of criminal responsibility and punishment.  Does neuroscience promote a deterministic explanation of human behavior that denies the free will presupposed in the law?  Or can our concepts of moral responsibility and free will be seen as compatible with neuroscientific explanations of criminal behavior?

Before thinking through those problems, let's review a few case histories illustrating how brain lesions can be connected to criminality.


TWO CASES OF EARLY-ONSET PFC DAMAGE

Antonio Damasio and his colleagues have reported two cases of young adults with impaired social and moral behavior apparently caused by early prefrontal cortex lesions occurring before they were 16 months old (Anderson et al. 1999).  When the researchers first saw them, subject A was a 20-year-old woman, and subject B was a 23-year-old man.  Both had been raised in stable, middle-class homes with college-educated parents who were attentive to their children.  Both patients had socially well-adapted siblings who were normal in their behavior.  

Patient A had been run over by a vehicle when she was 15 months old.  She recovered quickly.  When she was three years old, her parents noticed that she did not respond to verbal or physical punishment.  She became ever more disruptive throughout her childhood until, at age 14, she was placed in a special treatment center.  She stole from her family and was arrested repeatedly for shoplifting.  She gave birth to a baby at age 18, but she showed no interest in caring for the child.  She never sought employment.  When jobs were arranged for her, she was soon fired for being undependable and disruptive in the workplace.  She became completely dependent on her family and social agencies for financial support and management of her life.  She never expressed guilt for her misconduct.  She blamed other people for her problems.

Patient B had had a right frontal tumor surgically removed at age three months.  He recovered, and he showed normal development during his early childhood.  But at age nine, he showed remarkably flat emotions combined with occasional explosive outbursts of anger.  After graduating from high school, his behavioral problems intensified.  He could not hold a job.  He frequently engaged in violent assaults.  He became a petty thief.  He fathered a child, but provided no paternal care.  He expressed no guilt for his misbehavior.

Neuropsychological evaluations of both patients showed that they had normal intellectual ability.  In this, they were like patients with adult-onset lesions of the frontal cortices, in that their immoral conduct could not be explained by any lack of mental ability.  This is what Damasio identifies as the refutation of Immanuel Kant's claim that moral judgment is a purely rational activity.

The neuroimaging studies of these two patients showed that both had damage to prefrontal regions of the brain, with no evidence of damage in other areas.  The lesion in subject A was bilateral--with damage in both the left and right polar and ventromedial prefrontal cortices.  The lesion in subject B was unilateral--in the right prefrontal region.

When they were presented with verbal scenarios of social dilemmas and interpersonal conflicts, both patients failed to identify the primary issues in these dilemmas and failed to propose ways to resolve the conflicts.  In this, they differed greatly from patients with adult-onset prefrontal lesions, who have a factual knowledge of social rules applied to verbal scenarios, although they have no emotional commitment to these rules in their own real-life situations.  So it seemed that the adult-onset patients had at least learned the social norms of good conduct before their brains were damaged, even though they could not obey those norms in their own lives, whereas the early-onset patients had never learned those norms at all.

Patients A and B were also tested for their ability to make decisions that are personally advantageous to them.  They participated in the Iowa Gambling Experiment, which was designed by Damasio's student Antoine Bechara to be a lifelike simulation of how human beings must make decisions in the face of uncertainty, in which we weigh likely gains and losses as we seek a personally advantageous future in which our net gains exceed our net losses.  The Player sits in front of four decks of cards labeled A, B, C, and D.  The Player is given a loan of $2,000 and told that the goal of the game is to lose as little as possible of the loan and to make as much extra money as possible.

The Player turns cards, one at a time, from any of the four decks, until the experimenter says to stop.  The Player is told that turning each card will result in earning a sum of money, and that occasionally turning a card will result in both earning some money and having to pay some money to the experimenter.  The amount to be earned or paid is not known until the card is turned.  The Player is not allowed to keep written notes to tally how much has been earned or paid at any point.

The turning of any card in decks A and B pays $100, while the turning of any card in decks C and D pays only $50.  For every 10 cards turned over in decks A and B, at least one card will require a high payment, with a total loss of $1,250.  For every 10 cards turned over in decks C and D, at least one card will require a much lower payment, with a total loss of $250.  Consequently, over the long term, decks A and B are disadvantageous because they cost more (a net loss of $250 in every 10 cards), and decks C and D are advantageous because they bring an overall gain (a net gain of $250 in every 10 cards).
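To make that arithmetic concrete, here is a minimal Python sketch of the payoff schedule as just described--a simplified illustration of the deck structure, not Bechara's exact protocol, which varies the size and frequency of the penalties within each deck:

    # A simplified payoff schedule for the Iowa Gambling Task, following
    # the description above: decks A and B pay $100 per card but lose
    # $1,250 per block of 10 cards; decks C and D pay $50 but lose $250.
    DECKS = {
        "A": {"gain_per_card": 100, "loss_per_10_cards": 1250},
        "B": {"gain_per_card": 100, "loss_per_10_cards": 1250},
        "C": {"gain_per_card": 50,  "loss_per_10_cards": 250},
        "D": {"gain_per_card": 50,  "loss_per_10_cards": 250},
    }

    def net_per_10_cards(deck_name):
        """Net outcome of drawing 10 cards from one deck."""
        deck = DECKS[deck_name]
        return 10 * deck["gain_per_card"] - deck["loss_per_10_cards"]

    for name in DECKS:
        print(name, net_per_10_cards(name))
    # A -250 and B -250 (the "bad" decks); C 250 and D 250 (the "good" decks)

The high immediate gains of decks A and B mask their negative long-run value, which is exactly the trap that the frontally damaged patients could not resist.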

Players cannot predict exactly the gains and losses in their play of the cards, but normally players can guess that the high gain/high risk decks--A and B--are the "bad" decks, and the low gain/low risk decks--C and D--are the "good" decks that will yield the highest payoffs in the long run.  But patients who have suffered damage to the ventromedial prefrontal cortex prefer to pick cards from decks A and B, and because of the high penalties they incur, they go bankrupt halfway into the game.  This is what patients A and B did when they played the game:  they chose the high gain/high risk decks, although they must have known that this would be bad for them in the long run.

We see here the normal human tendency to be more concerned with the present than with the future--to choose what gives us high gains in the present even though this will bring high losses in the future.  But normally morally mature human beings learn to exercise prudent self-control in overcoming this tendency by choosing the low gain/low risk returns in the present if that is likely to lead to higher gains in the future.  Those with frontal lobe damage, however, seem to have an exaggerated tendency to go for the present high reward rather than bank on the future.  So what's wrong with them?

Damasio's answer is based on his "somatic marker hypothesis"--the idea that good decision-making about what is personally advantageous and socially appropriate is guided by moral emotions in the mind that are rooted in the visceral feelings of the body, and that these somatic markers are processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala.  Frontal lobe patients have all of the intellectual capacities--such as working memory, attention, and language--required for decision-making, but they do not feel the somatically marked emotions necessary for motivating good decisions.  These patients suffer from "acquired psychopathy," because they are like psychopaths in that they know the difference between right and wrong, but they don't care--they don't feel those moral emotions like guilt, shame, and regret that normally motivate human beings to do what is right and avoid what is wrong.

Antonio and Hanna Damasio decided to test this by using a polygraph to monitor the skin conductance response--also called electrodermal activity--while people played the Iowa Gambling Game, because the skin conductance response measures unconscious neurophysiological arousal (Bechara et al. 1996; Damasio 1994).  When we feel a strong emotion, our autonomic nervous system slightly increases the secretion from our skin's sweat glands.  Usually, this increase is too small for us to notice it.  But it can be detected by using a pair of electrodes connected to the skin and a polygraph.  The slight increase in sweat reduces the resistance to the passage of an electrical current.  And so if a low-voltage electrical current is passed between the two electrodes, the polygraph can detect the change in the amount of current conducted.
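As a rough illustration of the physics involved--with made-up numbers, not the actual settings of the Damasios' apparatus--here is a small Python sketch: at a fixed low voltage, a sweat-induced drop in skin resistance shows up as a proportionally higher conducted current (Ohm's law), which is what the polygraph records:

    # Illustrative numbers only: a fixed low voltage applied across two
    # electrodes on the skin.
    VOLTAGE = 0.5  # volts

    def skin_current_microamps(resistance_ohms):
        """Ohm's law, I = V / R, reported in microamps."""
        return VOLTAGE / resistance_ohms * 1e6

    baseline = skin_current_microamps(1_000_000)  # dry skin: ~1 megohm
    aroused = skin_current_microamps(800_000)     # sweat lowers resistance

    print(f"baseline: {baseline:.2f} uA, aroused: {aroused:.2f} uA")
    # baseline: 0.50 uA, aroused: 0.62 uA -- the small rise in conducted
    # current is the skin conductance response.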

As measured by this skin conductance response, both normal people and frontal lobe patients showed emotional arousal a few seconds after turning over a card and seeing the reward or punishment.  It was also found that as the game continued, in the time immediately before they selected a card from a bad deck, normal people showed a skin conductance response, indicating that their bodies were generating an unconscious signal about the badness of the deck, and the magnitude of this signal increased over the course of the game.  Normal people did not show this at the start of the game.  This was a response they had to learn while playing the game: their brains were signaling a warning about the likely bad future consequences of selecting cards from the bad decks.

But the frontally damaged patients did not show any anticipatory skin conductance responses!  Their brains were not sending any visceral predictive warning about a likely bad future outcome from selecting from the bad decks.  Even if they knew they were making a bad choice, they did not feel how bad it would be for them.  Even if they were as capable as normal people of making a cognitive estimate of the badness of their choice, the frontally damaged patients did not feel the somatic alarm signal that motivated normal people to avoid a bad choice.  Here, again, we see how good moral judgment requires not just pure reason or pure emotion but the interaction of both moral reason and moral emotion.  Knowing what is good for us is not good enough if we do not feel it.

Notice also here that Damasio assumes a broad conception of moral judgment as concerned not just with what is socially appropriate but also with what is personally advantageous.  A lot of the neuroscientific studies of moral psychology identify morality with what is good for society, and thus seem to assume that what is good for the individual is a matter of selfish interest beyond morality.  But Damasio's use of the Iowa Gambling Experiment is a test of how prudent individuals are in choosing what is good or advantageous for themselves as individuals.  Thus, Damasio agrees with the traditional conception of Aristotle and others that prudence--the correct choice of what is good for oneself--is a moral virtue, even the supreme virtue, and that morality generally is self-perfective or self-regarding.  But since we are social animals, what is good for us individually includes the social good.  (I have written previously about the morality of prudence here.)


THREE CASES OF ADULT-ONSET PFC DAMAGE

Elliot was 35 years old when Damasio first met him.  In Descartes' Error, Damasio called him "A Modern Phineas Gage," because, like Gage, he had suffered damage to his frontal lobes as a young adult; and, as with Gage, this caused a radical change in his personality.

Elliot had been a good husband and father.  He had had a successful professional career working with a business firm.  He was admired by his younger siblings and his colleagues.  But then something happened to him that changed him.  He began to have severe headaches.  He could not concentrate.  He could not complete his work projects.  He seemed to have lost his sense of responsibility.

Elliot's doctors discovered that he had a brain tumor the size of a small orange that was pressing against both frontal lobes.  Although the tumor was not malignant, its growth was destroying brain tissue.  The tumor and the damaged frontal lobe tissue had to be surgically removed.

His physical and cognitive recovery from the surgery seemed good.  He was walking and speaking like normal.  He was just as smart as he had always been.  But his family and friends noticed that his personality had changed.  As Damasio said, "Elliot was no longer Elliot."

He was so poor at organizing his work schedule that he was fired from his job.  He lost other jobs as well.  He invested all of his savings in foolish business ventures that ended in bankruptcy.

His wife divorced him.  He married again and then was divorced a second time.  He drifted around with no source of income.  When Damasio first saw him, he was living under the care of a sibling.

Elliot was intelligent.  He had a good memory.  He had a great fund of knowledge about the world and about what was happening in his life.  But his life was chaotic because he could not make good decisions about his life, and he could not plan for the future.  One could conclude, Damasio observed, that as was the case for Gage, "his free will had been compromised" (Damasio 1994, 38).

MRI studies of Elliot's brain revealed that he had the same brain damage as Gage--in the ventromedial areas of the prefrontal cortices.  These are the parts of the brain identified by Damasio as necessary for practical reasoning and decision making.

Standardized tests revealed that Elliot had a superior intellect.  His mental capacities for perception, memory, learning, language, attention, and mathematics were all good.  But even with all of these intellectual abilities, he still could not make good decisions about his personal and social life.

The problem with Elliot, Damasio finally realized, was not in the character of his intelligence but in his emotions--or rather in the absence of emotions.  Elliot could recount all of the tragic events in his life with an attitude of calmness, as if he were a dispassionate spectator of his own life.  He knew that his life had been ruined, but he felt nothing about it.

To test this emotional flatness, Damasio put Elliot through a series of psychophysiological experiments.  He was shown images of emotionally charged visual stimuli--such as houses burning or collapsing in earthquakes, or people injured in gory accidents or drowning in floods--and he felt no emotion.  Normally, when people see such emotional images, they show a strong skin conductance response.  But Elliot showed no skin conductance response at all--just like others with frontal damage.  In fact, he said that he knew that before his brain damage, he would have felt some deep emotions in response to such images, but now he could not feel those emotions, although he understood that he should feel them.

Elliot was presented with a long series of hypothetical scenarios of ethical dilemmas, financial decisions, and social problems; and then he was asked to generate solutions.  He was very good at this.  But then at the end of one session, after he had come up with lots of possible choices for action, he remarked: "And after all this, I still wouldn't know what to do!"

He could think of many hypothetical solutions to hypothetical problems, but he still could not decide what to do in real life situations.  His impairment was not a lack of social knowledge or understanding but a lack of emotional reactivity that would give motivational weight to his choices in real life.  As Damasio said, "the cold-bloodedness of Elliot's reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat" (Damasio 1994, 51).

When Elliot played the Iowa Gambling Game, he was fully engaged, and he clearly wanted to win.  But like other frontal damage patients, he could not resist taking cards from the "bad" decks; and he showed no anticipatory skin conductance response prior to choosing the "bad" decks.  Even after playing the game repeatedly, he could not correct his mistakes.

Although Elliot's case is rare, there are a few other well-studied cases of adult-onset PFC damage followed by criminal or otherwise deviant behavior.  For example, Christina Meyers and her colleagues have reported the case of a man they name J.Z., who had a brain tumor removed in 1962 at the age of 33, which damaged his left orbital frontal lobe.  He suffered a change of personality like that of Elliot: he seemed to show something similar to psychopathic personality or antisocial personality disorder (Meyers et al. 1992).

Before his surgery in 1962, J.Z. was a stable and reliable husband, father, and worker.  He had worked at the same clothing store for many years.  After the surgery, his behavior at work and at home became disordered and disruptive.  He lost his job, and he never again had steady employment.  He lost most of his family's savings in wild business deals.  His wife divorced him.  When he reported for a neuropsychological evaluation at Baylor College of Medicine in 1987, he was 58 years old, unemployed, and living with his mother.

Speaking during his evaluation, J.Z. "freely reported being involved in criminal activities and said he had three billion dollars hidden away in West Germany" (Meyers et al. 1992, 123).  But to me this sounds so ridiculously boastful that his talk of "criminal activities" seems dubious.

Meyers and her colleagues decided that his personality showed the traits of "antisocial personality disorder."  According to the American Psychiatric Association's Diagnostic and Statistical Manual-III-R of 1987, an adult can be identified with this disorder if he has at least four of the following 10 traits: (1) lack of consistent work behavior; (2) non-conformance to usual social norms; (3) tendency to be irritable and aggressive; (4) repeated problems honoring financial obligations; (5) failure to plan ahead or impulsive behavior; (6) untruthfulness; (7) recklessness with regard to personal safety; (8) inability to function as a responsible parent; (9) inability to sustain a monogamous relationship for more than one year; and (10) lack of remorse.  Meyers and her team saw five of these traits in J.Z.--numbers 1, 4, 5, 6, and 10.

J.Z. did not, however, satisfy one crucial criterion for this antisocial personality disorder: this disorder had not started in his childhood.  So like Damasio, Meyers calls this acquired antisocial personality disorder, as distinguished from developmental antisocial personality disorder.
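Stated as a rule, the diagnostic logic here is simple: at least four of the ten adult traits, plus the childhood-onset requirement that J.Z. failed to meet.  Here is a minimal Python sketch--my own paraphrase of the rule as summarized above, not clinical software:

    # The ten adult DSM-III-R traits, numbered as in the list above.
    TRAITS = {
        1: "lack of consistent work behavior",
        2: "non-conformance to usual social norms",
        3: "irritable and aggressive",
        4: "problems honoring financial obligations",
        5: "failure to plan ahead / impulsive",
        6: "untruthfulness",
        7: "reckless about personal safety",
        8: "inability to function as a responsible parent",
        9: "no monogamous relationship for more than one year",
        10: "lack of remorse",
    }

    def classify(observed_traits, childhood_onset):
        """Paraphrase of the diagnostic rule as described in the text."""
        if len(observed_traits) < 4:
            return "criteria not met"
        if childhood_onset:
            return "antisocial personality disorder"
        return "acquired antisocial personality disorder"

    # J.Z.: traits 1, 4, 5, 6, and 10 observed, but no childhood onset.
    jz = {1, 4, 5, 6, 10}
    print([TRAITS[i] for i in sorted(jz)])
    print(classify(jz, childhood_onset=False))
    # -> acquired antisocial personality disorder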

Robert Blair and Lisa Cipolotti (2000) have reported a similar case of acquired antisocial personality disorder after frontal lobe damage.  J.S. was a 56-year-old man in 1996 when he suffered trauma to the right frontal region of his brain.  A CT brain scan showed damage to the right orbitofrontal cortex and to the left amygdala.  Prior to this, he had worked successfully as an electrical engineer.  He was known as a quiet man who was never aggressive.  But after the brain injury, he became aggressively violent.  When he was in a rehabilitation hospital, he assaulted the nurses.  He frequently damaged property.  Like J.Z., J.S. satisfied some of the criteria in the DSM for antisocial personality disorder.


THE MATRICIDAL DAUGHTER

Charles Whitman murdered his mother.  The killing of a mother by her child is rare.  But when it does happen, the killer is almost always a son rather than a daughter.  So the story of the woman in Chile with adult-onset PFC damage who murdered her mother is surprising.

When Gricel Orellana and her colleagues first saw this woman at a hospital in Chile in 2009, she was 64 years old, and she had recently tried to kill a relative by poisoning her and then attempting to drown her in a bathtub.  This woman had had auditory hallucinations that God was commanding her to murder her relative.  As recommended by a forensic psychiatrist, a court declared her "not guilty by reason of insanity," and remanded her to psychiatric care (Orellana et al. 2013).

Amazingly, the court had made the same ruling--not guilty by reason of insanity--only two years earlier when she had murdered her mother.  She had tried unsuccessfully to strangle her mother with a scarf, and then the next day she drowned her in a bathtub.  She had followed her religious hallucinations telling her to kill her mother as a sacrifice to God.

This woman's shocking behavior had begun in 1985, when she was 40 years old, after she had had surgery to remove nasal polyps, and the surgery damaged her right ventromedial prefrontal cortex.  Before 1985, her life was normal, and she showed no unusual behavior, although she fought with her mother constantly.  After the surgery, her personality changed radically.  Her behavior became so disruptive that she could not maintain any stable social relationships.  She could not keep any regular jobs.

In 1993, she developed visual and auditory hallucinations with religious messages, which included God's command to kill her mother.  A psychiatrist diagnosed her as suffering from paranoid schizophrenia.

In 2009, an MRI of her brain confirmed that she had damage to the right ventromedial prefrontal cortex, which apparently had come from her 1985 surgery.  As in some of the other PFC lesion patients, this damage could have caused her "acquired psychopathy."

Orellana and her colleagues administered the same tests that Damasio had used with Elliot, including the Iowa Gambling Task, and they found the same evidence of poor decision-making and emotional flatness that is characteristic of psychopaths.


LESION NETWORK LOCALIZATION OF CRIMINAL BEHAVIOR

These six cases of frontal lobe damage followed by personality changes that resemble antisocial personality disorder are included in the 17 brain lesion cases associated with criminal behavior studied by Ryan Darby and his colleagues (Darby et al. 2018).  Although the most common lesion location was the vmPFC/orbitofrontal cortex, in at least seven of these 17 cases, the brain damage did not extend into these areas.  Three of the lesions were in the medial temporal lobe and amygdala, three in the anterior lateral temporal lobe, one in the dorsomedial prefrontal cortex, and one in the ventral striatum.

Darby and his colleagues suspected that the behavioral impairments caused by these lesions resulted not so much from damage to any one particular region itself but from the disruption of the connections between brain regions.  We can explain criminality as caused by some impairment of the brain's normal capacity for moral judgment.  The neuroscientific study of morality has shown that the neural basis for moral judgment cannot be located in any one area of the brain, because the "moral brain" is actually a large functional network connecting many different areas of the brain (Fumagalli and Priori 2012; Greene and Young 2020; Mendez 2009; Young and Dungan 2012).  As expected, Darby's group was able to show that all of the lesions were functionally connected to the same network of brain regions for moral judgment--including regions involved in morality, value-based decision making, and theory of mind.
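The core logic of lesion network mapping can be sketched in a few lines of Python--a toy reconstruction of the idea, not Darby's actual pipeline, which seeded each lesion in a normative human connectome of resting-state scans: compute each lesion's functional connectivity to the rest of the brain, threshold it, and ask which regions are connected to all of the lesions:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: hypothetical connectivity strengths from 17 lesion seeds
    # to 100 brain regions (standing in for resting-state correlations).
    N_LESIONS, N_REGIONS = 17, 100
    connectivity = rng.uniform(-1, 1, size=(N_LESIONS, N_REGIONS))

    # Plant a few "moral network" regions that are strongly connected to
    # every lesion seed, mimicking the published finding.
    moral_network = [3, 41, 77]
    connectivity[:, moral_network] = rng.uniform(0.6, 0.9, size=(N_LESIONS, 3))

    THRESHOLD = 0.5  # above this, a region counts as functionally connected
    connected = connectivity > THRESHOLD            # lesions x regions
    shared = np.flatnonzero(connected.all(axis=0))  # connected to ALL lesions

    print("Regions functionally connected to every lesion:", shared)
    # -> the planted regions [3, 41, 77], the toy "moral network"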


UNNATURAL FREE WILL, NATURAL DETERMINISM, AND NATURAL FREEDOM

So what does this teach us about whether we can hold people morally and legally responsible for their criminal behavior?  As I have indicated in a previous post, there are at least three possibilities.  First, we might argue that no matter what the biological science of natural causality claims, we can exercise "free will" as a supernatural or immaterial power that is an uncaused cause of our thinking and acting, and that without this, we have no grounds for holding people responsible for their behavior.

Second, we might argue that neuroscience and the other biological sciences show that all human thinking and acting is determined by natural biological causes; and so "free will" is illusory, and we cannot hold people responsible for what they do, because they had no choice.

The third possibility is somewhere in between these two extremes.  How we think and act is not compelled by natural causes.  But neither can we exercise "free will" understood as some spiritual or immaterial power that is an uncaused cause acting outside and beyond natural causality.  We have the power to act as we choose regardless of the cause of the choice.  The fact that all of our behavior is caused does not mean that it is compelled.  When we freely choose to think or act, what we do has been caused by our beliefs and desires, but this causation is not compulsion, and so we can be held legally or morally responsible for this.

Sometimes people are compelled by biological causes to behave in ways that they have not freely chosen.  So we might agree that the woman in Chile who killed her mother suffered from some form of paranoid psychosis: she heard the voice of God commanding her to kill her mother.  We might agree that she was innocent by reason of insanity, because she could not distinguish right from wrong.

But most of us most of the time have enough freedom of choice that we can be held responsible for our behavior.  Even most people with psychopathic brains do not become criminals.  Gage and Elliot might have had "acquired psychopathy" because of their frontal lobe damage, but they did not become violent criminals.


REFERENCES

Anderson, Steven W., Antoine Bechara, Hanna Damasio, Daniel Tranel, and Antonio Damasio. 1999. "Impairment of Social and Moral Behavior Related to Early Damage in Human Prefrontal Cortex." Nature Neuroscience 2: 1032-1037.
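
Bechara, Antoine, Daniel Tranel, Hanna Damasio, and Antonio Damasio. 1996. "Failure to Respond Autonomically to Anticipated Future Outcomes Following Damage to Prefrontal Cortex." Cerebral Cortex 6: 215-225.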

Blair, Robert J. R., and Lisa Cipolotti. 2000. "Impaired Social Response Reversal: A Case of 'Acquired Sociopathy.'" Brain 123: 1122-1141.

Damasio, Antonio. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. New York: G. P. Putnam's Sons.

Darby, R. Ryan, Andreas Horn, Fiery Cushman, and Michael D. Fox. 2018. "Lesion Network Localization of Criminal Behavior." Proceedings of the National Academy of Sciences 115: 601-606.

Fumagalli, Manuela, and Alberto Priori. 2012. "Functional and Clinical Neuroanatomy of Morality." Brain 135: 2006-2021.

Greene, Joshua, and Liane Young. 2020. "The Cognitive Neuroscience of Moral Judgment and Decision-Making." In David Poeppel, George Mangun, and Michael Gazzaniga, eds., The Cognitive Neurosciences, 1003-1013. Cambridge: MIT Press.

Mendez, Mario F. 2009. "The Neurobiology of Moral Behavior." CNS Spectrums 14: 608-620.

Meyers, Christina, Stephen Berman, Randall Scheibel, and Anne Hayman. 1992. "Case Report: Acquired Antisocial Personality Disorder Associated with Unilateral Left Orbital Frontal Lobe Damage." Journal of Psychiatry and Neuroscience 17: 121-125.

Orellana, Gricel, Luis Alvarado, Carlos Munoz-Neira, Rodrigo Avila, Mario Mendez, and Andrea Slachevsky. 2013. "Psychosis-Related Matricide Associated with a Lesion of the Ventromedial Prefrontal Cortex." Journal of the American Academy of Psychiatry and the Law 41: 401-406.

Young, Liane, and James Dungan. 2012. "Where in the Brain Is Morality?  Everywhere and Maybe Nowhere." Social Neuroscience 7: 1-10.