Thursday, April 08, 2021

Albert Somit (1919-2020): The Striving for a Biopolitical Science

 

[Photo: Albert Somit]


Albert Somit died on August 2, 2020, at the age of 100.  He was one of the most distinguished political scientists of his generation.  His distinctive contribution to political science was in applying biology to politics and thus promoting "politics and the life sciences" or "biopolitics," which has shaped my thinking since the late 1970s.  There is a good obituary that tells the story of his life.

Shortly before his death, he finished writing his last professional paper (co-authored with Steven Peterson) for presentation at the 2020 virtual convention of the American Political Science Association, with the title "Political Science and Biology: Then, Now, and Next."  This is a valuable survey of the history of biopolitics within the discipline of political science, which helps me to think about this as a striving for what I call "biopolitical science."  (You can request a copy of this paper from Steven Peterson at sap12@psu.edu)

Although they do not develop the idea, Somit and Peterson point to biopolitical science as I understand it when they say that by the 1960s political science was not yet a "real science," because it lacked an "overarching theory" or "paradigm" or "Big Idea," and that a biological science of politics could provide the paradigmatic intellectual framework that was needed.  This could succeed where other proposed frameworks--such as power, systems theory, structural-functionalism, and rational choice theory--had failed (4).  If this were to happen, "biopolitics would no longer be seen as a special, narrow part of political science--but a part of every field in the discipline, integrated into the larger world of the study of politics" (26).

Political science could become a true science, I have argued, by becoming a biopolitical science of political animals.  This science would be both Aristotelian and Darwinian.  It would be Aristotelian in fulfilling Aristotle's original understanding of political science as the biological study of the political life of human beings and other political animals.  It would be Darwinian in employing Charles Darwin's evolutionary theory as well as modern advances in Darwinian biology to explain political behavior as shaped by genetic evolution, cultural evolution, and individual judgment.  To illustrate how such a biopolitical science could account for the course of political history, I have shown how such a science could deepen our understanding of one of the crucial turns in American political history--Abraham Lincoln's Emancipation Proclamation of January 1, 1863.  I have written a post about this.  But actually this entire blog is devoted to developing this biopolitical science.

The idea of a biopolitical science has seemed ridiculous to most political scientists (and to social scientists generally), Somit and Peterson explain, because they agree with Emile Durkheim's dictum that "human social behavior was socially acquired," which means that human social behavior is culturally learned rather than biologically innate.  This is what John Tooby and Leda Cosmides have called the Standard Social Science Model (SSSM)--the belief that human social life is purely cultural or socially learned and thus transcends biology, that it's more nurture than nature (4).  

Somit and Peterson see "biopolitics" as the name for those few political scientists who began in the 1960s and 1970s to challenge the cultural determinism of the SSSM by showing that biological factors do have some influence on human politics and social behavior.  The idea here, Somit and Peterson say, is "that political scientists should give proper weight to the role played by Nature, as well as by Nurture, in shaping our social and political behavior."  "The most powerful factor shaping Homo politicus is our species' genetic legacy as social primates.  That legacy, together with socialization, influences almost every aspect of our social, political, economic, and cultural life" (11).

But notice that they accept the Nature/Nurture dichotomy and the separation of biological science as explaining our "genetic legacy" from cultural studies as explaining our "socialization."  They assume that cultural history and social learning are not part of biology.  Thus they implicitly deny the comprehensiveness of biopolitical science--as I understand it--because they deny that a biological science of politics can explain not only the genetic nature but also the cultural history of political behavior.  They identify the biological science of ethology or animal behavior as an important part of biopolitics, but they fail to recognize how studying the cultural history and biographical history of social animals has become a crucial part of ethology.  So, for example, Jane Goodall's Chimpanzees of Gombe is a study not only of the genetic universals of chimpanzee behavior but also of the cultural history and individual personalities of the chimpanzees at Gombe.  (I have written about the biological study of animal cultures and animal personalities here and here.)

Somit and Peterson rightly give prominence to John Hibbing and his colleagues for their promotion of genopolitics and the psychophysiology of political ideology.  But they also recognize a serious weakness in this research: its findings have failed to replicate, which suggests that it is deeply flawed.

But they do not recognize the fundamental problem with this approach to biopolitics: it works with unduly simplified models of genes and neurobiology that cannot capture the emergent complexity of political behavior as the product of many interacting causes and levels of analysis.  I have suggested that a more complex version of biopolitics would have to move through at least six dimensions of political evolution:

     1. genetic evolution
     2. epigenetic evolution
     3. the behavioral evolution of culture
     4. the symbolic evolution of culture
     5. ecological evolution
     6. the individual life history and judgment of political agents

That's what I mean by biopolitical science.

I have written about this here and here.

I should also say that I have been free to devote a good part of my intellectual career to developing this idea of biopolitical science because I was fortunate to be part of the "Politics and Life Sciences" program in the Department of Political Science at Northern Illinois University.  Somit and Peterson recognize this as the only program of its kind--founded in the early 1980s and coming to an end in 2012.  I joined the program at its start in 1983 and stayed there until my retirement in 2012.

There were two unique features of this program at NIU.  First, this was the only Ph.D. program in political science anywhere in which Politics and the Life Sciences was a graduate field of study.  Second, the undergraduate program included some courses that were cross-listed as both political science and biology courses, so that they enrolled both biology majors and political science majors, which promoted good interdisciplinary class discussions.  These courses were popular with biology majors who wanted to think about the broad humanistic implications of biology beyond the narrow constraints of their regular biology classes.  I team-taught one of these courses with a biology professor (Neil Blackstone).

As Somit and Peterson indicate, in recent years a Ph.D. specialty in biopolitics has been established at the University of Nebraska at Lincoln.

Monday, April 05, 2021

If Good Brains Support Morality, Do Bad Brains Support Immorality?

On August 1, 1966, twenty-five-year-old Charles Whitman went to the top of the University of Texas Tower carrying guns and ammunition.  Having earned a sharpshooter's badge in the Marines, he was an excellent marksman.  For over 90 minutes, he shot at anyone he saw.  He killed 13 people that day before he was killed by Austin police.  This was the worst school shooting in American history until the shooting at Virginia Tech in 2007.  The night before he went to the Tower, he killed his mother and his wife.

Those who knew Whitman were shocked, because he had always appeared to be a talented young man with good character.  He earned his Eagle Scout Badge when he was only 12 years old, which made him one of the youngest boys to earn that honor in the history of the Boy Scouts.  But then as he neared his 25th birthday, he changed.

Whitman had been going to doctors and psychiatrists with complaints that something was wrong with him mentally--that he felt overly aggressive and had thoughts of killing people.  He also felt tremendous pain in his head.  In the suicide note that he left, he asked that there should be an autopsy of his brain to see what was wrong, which might improve the scientific understanding of the biological causes of mental illness leading to violence.

Texas Governor John Connally appointed a commission of experts to study the causes of Whitman's behavior.  They found evidence of a tumor the size of a pecan pressing on his amygdala.  They said that while it was possible that this had contributed to his violent emotions, there was not enough scientific understanding of how brain lesions like this influence thought and behavior to reach any firm conclusion that the tumor was a contributing cause of his murderous actions.

Scientists have continued to debate this.  Some have seen evidence that his brain tumor was probably a partial cause of his crime.  Others have pointed to possible causes in his life experience.  Whitman's father had physically and emotionally abused him and his mother.  His mother had been forced to run away from his father and move to Austin to be close to her son.  These and other psychological stressors could have led to the mental break that drove Whitman to his violent, self-destructive behavior.  Far from being the cause of his criminal violence, his brain tumor might have been only a coincidental occurrence.

And yet there are some cases of people with no history of criminal propensities who become criminal shortly after suffering brain lesions, which suggests some causal connection between the crime and the brain disorder. 

If we understood how certain kinds of brain damage might increase the probability of criminal behavior, could that help us in predicting and punishing such behavior?  Does criminality become less blameworthy when it is at least partially caused by neurological disorders?  Or should we say that as long as someone like Whitman fully understands what he is doing and chooses to do it, even as he knows that it is wrong, his blameworthiness is not reduced?

And if we understood what went wrong in Whitman's brain to create his criminal mind, could this also help us understand what must go right in a normal brain to create a moral mind?

It should be easier today than it was in 1966 to answer these questions, because since the 1990s the technology of brain scanning--particularly MRI--has allowed us for the first time to see images of the structure and functioning of both criminal minds and moral minds in the brain.  We now have many case studies of people who suffered damage to particular parts of the brain and then showed criminal behavior sometime after that damage, and brain scans can identify the areas of the brain that were damaged.

There are, however, at least four problems in these studies.  The first is the problem of individual differences.  Most people who have damage to the same part of the brain do not become criminals.  So there must be factors other than brain damage that vary for different individuals with different genetic propensities, different neural circuitry, and different life histories that explain why some become criminals, and others do not.  For example, Phineas Gage suffered massive damage to his ventromedial prefrontal cortex, and while people with that kind of brain lesion often become criminals, Gage did not.

The second problem is that the lesions that seem to cause criminality occur in several different parts of the brain.  While lesions in the prefrontal cortex are most commonly associated with criminal behavior, lesions in other parts of the brain are sometimes associated with criminality.  Similarly, while the normal functioning of the prefrontal cortex seems to be required for good moral judgment, the neuroscientific study of morality has identified many other areas of the brain that contribute to moral experience.

The third problem is that the plasticity of the brain allows the brain to reorganize its neural circuitry after damage has occurred, so that healthy parts of the brain can take on some of the functionality that was lost in the damaged part. 

The fourth problem is that it is not clear how the correlation between brain damage and criminality should influence our legal standards of criminal responsibility and punishment.  Does neuroscience promote a deterministic explanation of human behavior that denies the free will presupposed in the law?  Or can our concepts of moral responsibility and free will be seen as compatible with neuroscientific explanations of criminal behavior?

Before thinking through those problems, let's review a few case histories illustrating how brain lesions can be connected to criminality.


TWO CASES OF EARLY-ONSET PFC DAMAGE

Antonio Damasio and his colleagues have reported two cases of young adults with impaired social and moral behavior apparently caused by early prefrontal cortex lesions occurring before they were 16 months old (Anderson et al. 1999).  When the researchers first saw them, subject A was a 20-year-old woman, and subject B was a 23-year-old man.  Both had been raised in stable, middle-class homes with college-educated parents who were attentive to their children.  Both patients had socially well-adapted siblings who were normal in their behavior.  

Patient A had been run over by a vehicle when she was 15 months old.  She recovered quickly.  When she was three years old, her parents noticed that she did not respond to verbal or physical punishment.  She became ever more disruptive through her childhood, until, at age 14, she was placed in a special treatment center.  She stole from her family and was arrested repeatedly for shoplifting.  She gave birth to a baby at age 18, but she showed no interest in caring for the child.  She never sought employment.  When jobs were arranged for her, she was soon fired for being undependable and disruptive in the workplace.  She became completely dependent on her family and social agencies for financial support and management of her life.  She never expressed guilt for her misconduct.  She blamed other people for her problems.

Patient B had had a right frontal tumor surgically removed at age three months.  He recovered, and he showed normal development during his early childhood.  But at age nine, he showed remarkably flat emotions combined with occasional explosive outbursts of anger.  After graduating from high school, his behavioral problems intensified.  He could not hold a job.  He frequently engaged in violent assaults.  He became a petty thief.  He fathered a child, but provided no paternal care.  He expressed no guilt for his misbehavior.

Neuropsychological evaluations of both patients showed that they had normal intellectual ability.  In this, they were like patients with adult-onset lesions of the frontal cortices in that their immoral conduct could not be explained by their lacking mental ability.  This is what Damasio identifies as the refutation of Immanuel Kant's claim that moral judgment is a purely rational activity.

The neuroimaging studies of these two patients showed that both had damage to prefrontal regions of the brain, with no evidence of damage in other areas.  The lesion in subject A was bilateral--with damage in both the left and right polar and ventromedial prefrontal cortices.  The lesion in subject B was unilateral--in the right prefrontal region.

When they were presented with verbal scenarios of social dilemmas and interpersonal conflicts, both patients failed to identify the primary issues in these dilemmas and failed to propose ways to resolve the conflicts.  In this, they differed greatly from patients with adult-onset prefrontal lesions, who have a factual knowledge of social rules as applied to verbal scenarios, even though they have no emotional commitment to these rules in their own real-life situations.  So it seemed that the adult-onset patients had at least learned the social norms of good conduct before their brains were damaged, even though they could no longer obey those norms in their own lives, whereas the early-onset patients had never learned those norms at all.

Patients A and B were also tested for their ability to make decisions that are personally advantageous to them.  They participated in the Iowa Gambling Experiment, which was designed by Damasio's student Antoine Bechara to be a lifelike simulation of how human beings must make decisions in the face of uncertainty, in which we weigh likely gains and losses as we seek a personally advantageous future in which our net gains exceed our net losses.  The Player sits in front of four decks of cards labeled A, B, C, and D.  The Player is given a loan of $2,000 and told that the goal of the game is to lose as little as possible of the loan and to make as much extra money as possible.

The Player turns cards, one at a time, from any of the four decks, until the experimenter says to stop.  The Player is told that turning each card will result in earning a sum of money, and that occasionally turning a card will result in both earning some money and having to pay some money to the experimenter.  The amount to be earned or paid is not known until the card is turned.  The Player is not allowed to keep written notes to tally how much has been earned or paid at any point.

The turning of any card in decks A and B pays $100, while the turning of any card in decks C and D pays only $50.  But for every 10 cards turned over in decks A and B, the penalty cards require payments totaling $1,250, while for every 10 cards turned over in decks C and D, the penalty cards require payments totaling only $250.  Consequently, over the long term, decks A and B are disadvantageous because they cost more than they pay (a net loss of $250 in every 10 cards), and decks C and D are advantageous because they bring an overall gain (a net gain of $250 in every 10 cards).
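To make this payoff arithmetic concrete, here is a minimal simulation sketch in Python.  The deck parameters follow the description above, but the lump-sum placement of penalties within each block of 10 cards is a simplifying assumption of my own, not a reconstruction of Bechara's actual card schedules.

    import random

    # Simplified Iowa Gambling Task decks, following the description above:
    # A and B pay $100 per card but lose $1,250 per 10 cards (net -$250);
    # C and D pay $50 per card but lose only $250 per 10 cards (net +$250).
    DECKS = {
        "A": {"reward": 100, "penalty_per_10": 1250},
        "B": {"reward": 100, "penalty_per_10": 1250},
        "C": {"reward": 50,  "penalty_per_10": 250},
        "D": {"reward": 50,  "penalty_per_10": 250},
    }

    def net_per_10_cards(deck):
        """Expected net outcome of drawing 10 cards from one deck."""
        d = DECKS[deck]
        return 10 * d["reward"] - d["penalty_per_10"]

    def simulate(choices, start=2000):
        """Play a sequence of deck choices; as a toy approximation, the
        penalty is charged as one lump sum after every 10th draw from a deck."""
        money = start
        draws = {name: 0 for name in DECKS}
        for deck in choices:
            money += DECKS[deck]["reward"]
            draws[deck] += 1
            if draws[deck] % 10 == 0:
                money -= DECKS[deck]["penalty_per_10"]
        return money

    for deck in "ABCD":
        print(deck, "net per 10 cards:", net_per_10_cards(deck))
    # A player who sticks to the "bad" decks loses money over 100 draws,
    # while a player who sticks to the "good" decks comes out ahead.
    print("bad decks (A/B):", simulate(random.choices("AB", k=100)))
    print("good decks (C/D):", simulate(random.choices("CD", k=100)))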

Players cannot predict exactly the gains and losses in their play of the cards, but normally players can guess that the high gain/high risk decks--A and B--are the "bad" decks, and the low gain/low risk decks--C and D--are the "good" decks that will yield the highest payoffs in the long run.  But patients who have suffered damage to the ventromedial prefrontal cortex prefer to pick cards from decks A and B, and because of the high penalties they incur, they are bankrupt halfway into the game.  This is what patients A and B did when they played the game: they chose the high gain/high risk decks, although they must have known that this would be bad for them in the long run.

We see here the normal human tendency to be more concerned with the present than with the future--to choose what gives us high gains in the present even though this will bring high losses in the future.  But normally morally mature human beings learn to exercise prudent self-control in overcoming this tendency by choosing the low gain/low risk returns in the present if that is likely to lead to higher gains in the future.  Those with frontal lobe damage, however, seem to have an exaggerated tendency to go for the present high reward rather than bank on the future.  So what's wrong with them?

Damasio's answer is based on his "somatic marker hypothesis"--the idea that good decision-making about what is personally advantageous and socially appropriate is guided by moral emotions in the mind that are rooted in the visceral feelings of the body, and that these somatic markers are processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala.  Frontal lobe patients have all of the intellectual capacities--such as working memory, attention, and language--required for decision-making, but they do not feel the somatically marked emotions necessary for motivating good decisions.  These patients suffer from "acquired psychopathy," because they are like psychopaths in that they know the difference between right and wrong, but they don't care--they don't feel those moral emotions like guilt, shame, and regret that normally motivate human beings to do what is right and avoid what is wrong.

Antonio and Hanna Damasio decided to test this by using a polygraph to monitor the skin conductance response--also called electrodermal activity--while people played the Iowa Gambling Game, because the skin conductance response measures unconscious neurophysiological arousal (Bechara et al. 1996; Damasio 1994).  When we feel a strong emotion, our autonomic nervous system slightly increases the secretion from our skin's sweat glands.  Usually, this increase is too small for us to notice.  But it can be detected by attaching a pair of electrodes to the skin and connecting them to a polygraph.  The slight increase in sweat reduces the skin's resistance to the passage of an electrical current.  And so if a low-voltage electrical current is passed between the two electrodes, the polygraph can detect the change in the amount of current conducted.

As measured by this skin conductance response, both normal people and frontal lobe patients showed emotional arousal a few seconds after turning over a card and seeing the reward or punishment.  It was also found that as the game continued, in the time immediately before they selected a card from a bad deck, normal people showed a skin conductance response, indicating that their bodies were generating an unconscious signal about the badness of the deck, and the magnitude of this signal increased over the course of the game.  Normal people did not show this at the start of the game.  This was a response they had to learn while playing the game: their brains were signaling a warning about the likely bad future consequences of selecting cards from the bad decks.
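As a rough illustration of how such an anticipatory response might be scored, here is a minimal sketch in Python using synthetic data; the sampling rate, window lengths, and simple mean-difference rule are assumptions for illustration, not the scoring procedure actually used by Bechara and the Damasios.

    import statistics

    def anticipatory_scr(conductance, choice_index, sample_rate_hz=10,
                         window_s=5, baseline_s=5):
        """Score the anticipatory skin conductance response for one card
        choice as the mean conductance (in microsiemens) over the few
        seconds just before the choice, minus the mean over an earlier
        baseline window.  The window lengths and the mean-difference rule
        are illustrative assumptions, not the published procedure."""
        w = int(window_s * sample_rate_hz)
        b = int(baseline_s * sample_rate_hz)
        pre = conductance[choice_index - w:choice_index]
        base = conductance[choice_index - w - b:choice_index - w]
        return statistics.mean(pre) - statistics.mean(base)

    # Synthetic example: a flat baseline followed by a slow rise in
    # conductance as the player reaches toward a "bad" deck.
    signal = [2.0] * 50 + [2.0 + 0.01 * i for i in range(50)]
    print(round(anticipatory_scr(signal, choice_index=100), 3))  # positive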

But the frontally damaged patients did not show any anticipatory skin conductance responses!  Their brains were not sending any visceral predictive warning about a likely bad future outcome from selecting from the bad decks.  Even if they knew they were making a bad choice, they did not feel how bad it would be for them.  Even if they were as capable as normal people of making a cognitive estimate of the badness of their choice, the frontally damaged patients did not feel the somatic alarm signal that motivated normal people to avoid a bad choice.  Here, again, we see how good moral judgment requires not just pure reason or pure emotion but the interaction of both moral reason and moral emotion.  Knowing what is good for us is not good enough if we do not feel it.

Notice also here that Damasio assumes a broad conception of moral judgment as concerned not just with what is socially appropriate but also with what is personally advantageous.  A lot of the neuroscientific studies of moral psychology identify morality with what is good for society, and thus seem to assume that what is good for the individual is a matter of selfish interest beyond morality.  But Damasio's use of the Iowa Gambling Experiment is a test of how prudent individuals are in choosing what is good or advantageous for themselves as individuals.  Thus, Damasio agrees with the traditional conception of Aristotle and others that prudence--the correct choice of what is good for oneself--is a moral virtue, even the supreme virtue, and that morality generally is self-perfective or self-regarding.  But since we are social animals, what is good for us individually includes the social good.  (I have written previously about the morality of prudence here.)


THREE CASES OF ADULT-ONSET PFC DAMAGE

Elliot was 35 years old when Damasio first met him.  In Descartes' Error, Damasio called him "A Modern Phineas Gage," because he had suffered damage to his frontal lobes as a young adult just like Gage; and like Gage this caused a radical change in his personality.

Elliot had been a good husband and father.  He had had a successful professional career working with a business firm.  He was admired by his younger siblings and his colleagues.  But then something happened to him that changed him.  He began to have severe headaches.  He could not concentrate.  He could not complete his work projects.  He seemed to have lost his sense of responsibility.

Elliot's doctors discovered that he had a brain tumor the size of a small orange that was pressing against both frontal lobes.  Although the tumor was not malignant, its growth was destroying brain tissue.  The tumor and the damaged frontal lobe tissue had to be surgically removed.

His physical and cognitive recovery from the surgery seemed good.  He was walking and speaking like normal.  He was just as smart as he had always been.  But his family and friends noticed that his personality had changed.  As Damasio said, "Elliot was no longer Elliot."

He was so poor at organizing his work schedule that he was fired from his job.  He lost other jobs as well.  He invested all of his savings in foolish business ventures that ended in bankruptcy.

His wife divorced him.  He married again and then was divorced a second time.  He drifted around with no source of income.  When Damasio first saw him, he was living under the care of a sibling.

Elliot was intelligent.  He had a good memory.  He had a great fund of knowledge about the world and about what was happening in his life.  But his life was chaotic because he could not make good decisions about his life, and he could not plan for the future.  One could conclude, Damasio observed, that as was the case for Gage, "his free will had been compromised" (Damasio 1994, 38).

MRI studies of Elliot's brain revealed that he had the same brain damage as Gage--in the ventromedial areas of the prefrontal cortices.  These are the part of the brain identified by Damasio as necessary for practical reasoning and decision making.

Standardized tests revealed that Elliot had a superior intellect.  His mental capacities for perception, memory, learning, language, attention, and mathematics were all good.  But even with all of these intellectual abilities, he still could not make good decisions about his personal and social life.

The problem with Elliot, Damasio finally realized, was not in the character of his intelligence but in his emotions--or rather in the absence of emotions.  Elliot could recount all of the tragic events in his life with an attitude of calmness, as if he were a dispassionate spectator of his own life.  He knew that his life had been ruined, but he felt nothing about it.

To test this emotional flatness, Damasio put Elliot through a series of psychophysiological experiments.  He was shown images of emotionally charged visual stimuli--such as houses burning or collapsing in earthquakes, or people injured in gory accidents or drowning in floods--and he felt no emotion.  Normally, when people see such emotional images, they show a strong skin conductance response.  But Elliot showed no skin conductance response at all--just like others with frontal damage.  In fact, he said that he knew that before his brain damage, he would have felt some deep emotions in response to such images, but now he could not feel those emotions, although he understood that he should feel them.

Elliot was presented with a long series of hypothetical scenarios of ethical dilemmas, financial decisions, and social problems; and then he was asked to generate solutions.  He was very good at this.  But then at the end of one session, after he had come up with lots of possible choices for action, he remarked: "And after all this, I still wouldn't know what to do!"

He could think of many hypothetical solutions to hypothetical problems, but he still could not decide what to do in real life situations.  His impairment was not a lack of social knowledge or understanding but a lack of emotional reactivity that would give motivational weight to his choices in real life.  As Damasio said, "the cold-bloodedness of Elliot's reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat" (Damasio 1994, 51).

When Elliot played the Iowa Gambling Game, he was fully engaged, and he clearly wanted to win.  But like other frontal damage patients, he could not resist taking cards from the "bad" decks; and he showed no anticipatory skin conductance response prior to choosing the "bad" decks.  Even after playing the game repeatedly, he could not correct his mistakes.

Although Elliot's case is rare, there are a few other well-studied cases of adult-onset PFC damage followed by criminal or otherwise deviant behavior.  For example, Christina Meyers and her colleagues have reported the case of a man they name J.Z., who had a brain tumor removed in 1962 at the age of 33, which damaged his left orbital frontal lobe.  He suffered a change of personality like that of Elliot: he seemed to show something similar to psychopathic personality or antisocial personality disorder (Meyers et al. 1992).

Before his surgery in 1962, J.Z. was a stable and reliable husband, father, and worker.  He had worked at the same clothing store for many years.  After the surgery, his behavior at work and at home became disordered and disruptive.  He lost his job, and he never again had steady employment.  He lost most of his family's savings in wild business deals.  His wife divorced him.  When he reported for a neuropsychological evaluation at Baylor College of Medicine in 1987, he was 58 years old, unemployed, and living with his mother.

Speaking during his evaluation, J.Z. "freely reported being involved in criminal activities and said he had three billion dollars hidden away in West Germany" (Meyers et al. 1992, 123).  But to me this is so ridiculously boastful that his talk about "criminal activities" seems dubious.

Meyers and her colleagues decided that his personality showed the traits of "antisocial personality disorder."  According to the American Psychiatric Association's Diagnostic and Statistical Manual-III-R of 1987, an adult can be identified with this disorder if he has at least four of the following 10 traits: (1) lack of consistent work behavior; (2) non-conformance to usual social norms; (3) tendency to be irritable and aggressive; (4) repeated problems honoring financial obligations; (5) failure to plan ahead or impulsive behavior; (6) untruthfulness; (7) recklessness with regard to personal safety; (8) inability to function as a responsible parent; (9) inability to sustain a monogamous relationship for more than one year; and (10) lack of remorse.  Meyers and her team saw at least 5 of these traits in J.Z.--1, 4, 5, 6, and 10.
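The logic of this checklist can be shown in a short sketch in Python; the trait labels are paraphrased from the list above, and the simple count-based rule captures only the adult criterion, not the full DSM-III-R diagnosis (which, as noted below, also requires childhood onset).

    # DSM-III-R adult traits, paraphrased from the list above.  A full
    # diagnosis of antisocial personality disorder also requires onset in
    # childhood and other conditions not modeled here.
    TRAITS = {
        1: "lack of consistent work behavior",
        2: "non-conformance to usual social norms",
        3: "irritability and aggressiveness",
        4: "repeated failure to honor financial obligations",
        5: "failure to plan ahead, or impulsiveness",
        6: "untruthfulness",
        7: "recklessness about personal safety",
        8: "inability to function as a responsible parent",
        9: "inability to sustain a monogamous relationship for more than one year",
        10: "lack of remorse",
    }

    def meets_adult_criterion(observed_traits, threshold=4):
        """True if at least `threshold` of the ten adult traits are present."""
        return len(set(observed_traits) & set(TRAITS)) >= threshold

    # The traits Meyers and her colleagues reported for J.Z.
    jz_traits = [1, 4, 5, 6, 10]
    print(meets_adult_criterion(jz_traits))  # True: 5 of the 10 traits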

J.Z. did not, however, satisfy one crucial criterion for this antisocial personality disorder: this disorder had not started in his childhood.  So like Damasio, Meyers calls this acquired antisocial personality disorder, as distinguished from developmental antisocial personality disorder.

Robert Blair and Lisa Cipolotti (2000) have reported a similar case of acquired antisocial personality disorder after frontal lobe damage.  J.S. was a 56-year-old man in 1996 when he suffered trauma to the right frontal region of his brain.  A CT brain scan showed damage to the right orbitofrontal cortex and to the left amygdala.  Prior to this, he had worked successfully as an electrical engineer.  He was known as a quiet man who was never aggressive.  But after the brain injury, he became aggressively violent.  When he was in a rehabilitation hospital, he assaulted the nurses.  He frequently damaged property.  Like J.Z., J.S. satisfied some of the criteria in the DSM for antisocial personality disorder.


THE MATRICIDAL DAUGHTER

Charles Whitman murdered his mother.  The killing of a mother by her child is rare.  But when it does happen, the killer is almost always a son rather than a daughter.  So the story of the woman in Chile with adult-onset PFC damage who murdered her mother is surprising.

When Gricel Orellana and her colleagues first saw this woman at a hospital in Chile in 2009, she was 64 years old, and she had recently tried to kill a relative by poisoning her and then attempting to drown her in a bathtub.  This woman had had auditory hallucinations that God was commanding her to murder her relative.  As recommended by a forensic psychiatrist, a court declared her "not guilty by reason of insanity," and remanded her to psychiatric care (Orellana et al. 2013).

Amazingly, the court had made the same ruling--not guilty by reason of insanity--only two years earlier when she had murdered her mother.  She had tried unsuccessfully to strangle her mother with a scarf, and then the next day she drowned her in a bathtub.  She had followed her religious hallucinations telling her to kill her mother as a sacrifice to God.

This woman's shocking behavior had begun in 1985, when she was 40 years old, after she had had surgery to remove nasal polyps, and the surgery damaged her right ventromedial prefrontal cortex.  Before 1985, her life was normal, and she showed no unusual behavior, although she fought with her mother constantly.  After the surgery, her personality changed radically.  Her behavior became so disruptive that she could not maintain any stable social relationships.  She could not keep any regular jobs.

In 1993, she developed visual and auditory hallucinations with religious messages, which included God's command to kill her mother.  A psychiatrist diagnosed her as suffering from paranoid schizophrenia.

In 2009, an MRI of her brain confirmed that she had damage to the right ventromedial prefrontal cortex, which apparently had come from her 1985 surgery.  As with some of the other PFC lesion patients, this damage could have caused her "acquired psychopathy."

Orellana and her colleagues administered the same tests that Damasio had used with Elliot, including the Iowa Gambling Task, and they found the same evidence of poor decision-making and emotional flatness that is characteristic of psychopaths.


LESION NETWORK LOCALIZATION OF CRIMINAL BEHAVIOR

These 6 cases of frontal lobe damage followed by personality changes that resemble antisocial personality disorder are included in the 17 brain lesion cases associated with criminal behavior studied by Ryan Darby and his colleagues (Darby et al. 2018).  Although the most common lesion location was the vmPFC/orbitofrontal cortex, in at least seven of these 17 cases, the brain damage did not extend into these areas.  Three of the lesions were in the medial temporal lobe and amygdala, three in the anterior lateral temporal lobe, one in the dorsomedial prefrontal cortex, and one in the ventral striatum.

Darby and his colleagues suspected that the behavioral impairments caused by these lesions resulted not so much from damage to any one particular region as from the disruption of the connections between brain regions.  We can explain criminality as caused by some impairment of the brain's normal capacity for moral judgment.  The neuroscientific study of morality has shown that the neural basis for moral judgment cannot be located in any one area of the brain, because the "moral brain" is actually a large functional network connecting many different areas of the brain (Fumagalli and Priori 2012; Greene and Young 2020; Mendez 2009; Young and Dungan 2012).  As expected, Darby's group was able to show that all of the lesions were functionally connected to the same network of brain regions--including regions involved in morality, value-based decision making, and theory of mind.
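The logic of lesion network mapping can be illustrated with a toy sketch in Python: instead of asking whether the lesions overlap anatomically, one asks which regions every lesion site is functionally connected to.  The region names and connection sets below are invented placeholders, not data from Darby et al. (2018), who derived connectivity from a large normative functional imaging dataset.

    # Toy illustration of lesion network mapping: lesion sites that do not
    # overlap anatomically may nonetheless all be functionally connected to
    # the same set of regions.  These site names and connection sets are
    # invented placeholders, not data from Darby et al. (2018).
    LESION_CONNECTIVITY = {
        "lesion_in_vmPFC":         {"vmPFC", "amygdala", "temporal_pole", "TPJ"},
        "lesion_in_amygdala":      {"amygdala", "vmPFC", "TPJ", "ventral_striatum"},
        "lesion_in_temporal_pole": {"temporal_pole", "vmPFC", "amygdala", "TPJ"},
    }

    def shared_network(connectivity):
        """Regions functionally connected to every lesion location."""
        return set.intersection(*connectivity.values())

    print(sorted(shared_network(LESION_CONNECTIVITY)))
    # ['TPJ', 'amygdala', 'vmPFC'] -- a common network despite
    # non-overlapping lesion sites.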


UNNATURAL FREE WILL, NATURAL DETERMINISM, AND NATURAL FREEDOM

So what does this teach us about whether we can hold people morally and legally responsible for their criminal behavior?  As I have indicated in a previous post, there are at least three possibilities.  First, we might argue that, no matter what the biological science of natural causality claims, we have "free will"--a supernatural or immaterial power of will that is an uncaused cause of our thinking and acting--and that without this, we have no grounds for holding people responsible for their behavior.

Second, we might argue that neuroscience and the other biological sciences show that all human thinking and acting is determined by natural biological causes; and so "free will" is illusory, and we cannot hold people responsible for what they do, because they had no choice.

The third possibility is somewhere in between these two extremes.  How we think and act is not compelled by natural causes.  But neither can we exercise "free will" understood as some spiritual or immaterial power that is an uncaused cause acting outside and beyond natural causality.  We have the power to act as we choose regardless of the cause of the choice.  The fact that all of our behavior is caused does not mean that it is compelled.  When we freely choose to think or act, what we do has been caused by our beliefs and desires, but this causation is not compulsion, and so we can be held legally or morally responsible for this.

Sometimes people are compelled by biological causes to behave in ways that they have not freely chosen.  So we might agree that the woman in Chile who killed her mother suffered from some form of paranoid psychosis: she heard the voice of God commanding her to kill her mother.  We might agree that she was innocent by reason of insanity, because she could not distinguish right from wrong.

But most of us most of the time have enough freedom of choice that we can be held responsible for our behavior.  Even most of those people with psychopathic brains do not become criminals.  Gage and Elliot might have had "acquired psychopathy" because of their frontal lobe damage, but they did not become violent criminals.


REFERENCES

Anderson, Steven W., Antoine Bechara, Hanna Damasio, Daniel Tranel, and Antonio Damasio. 1999. "Impairment of Social and Moral Behavior Related to Early Damage in Human Prefrontal Cortex." Nature Neuroscience 2: 1032-1037.

Blair, Robert J. R., and Lisa Cipolotti. 2000. "Impaired Social Response Reversal: A Case of 'Acquired Sociopathy.'" Brain 123: 1122-1141.

Damasio, Antonio. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. New York: G. P. Putnam's Sons.

Darby, R. Ryan, Andreas Horn, Fiery Cushman, and Michael D. Fox. 2018. "Lesion Network Localization of Criminal Behavior." Proceedings of the National Academy of Sciences 115: 601-606.

Fumagalli, Manuela, and Alberto Priori. 2012. "Functional and Clinical Neuroanatomy of Morality." Brain 135: 2006-2021.

Greene, Joshua, and Liane Young. 2020. "The Cognitive Neuroscience of Moral Judgment and Decision-Making." In David Poeppel, George Mangun, and Michael Gazzaniga, eds., The Cognitive Neurosciences, 1003-1013. Cambridge: MIT Press.

Mendez, Mario F. 2009. "The Neurobiology of Moral Behavior." CNS Spectrums 14: 608-620.

Meyers, Christina, Stephen Berman, Randall Scheibel, and Anne Hayman. 1992. "Case Report: Acquired Antisocial Personality Disorder Associated with Unilateral Left Orbital Frontal Lobe Damage." Journal of Psychiatry and Neuroscience 17: 121-125.

Orellana, Gricel, Luis Alvarado, Carlos Munoz-Neira, Rodrigo Avila, Mario Mendez, and Andrea Slachevsky. 2013. "Psychosis-Related Matricide Associated with a Lesion of the Ventromedial Prefrontal Cortex." Journal of the American Academy of Psychiatry and the Law 41: 401-406.

Young, Liane, and James Dungan. 2012. "Where in the Brain Is Morality?  Everywhere and Maybe Nowhere." Social Neuroscience 7: 1-10.


Sunday, March 28, 2021

Phineas Gage and Damasio's Search for Moral Judgment in the Brain

 

[Image: The Phineas Gage Story]

[Image: A Daguerreotype of Phineas Gage, After His Recovery from the Accident]


In 1994, I was fascinated by Antonio Damasio's account of how Phineas Gage had an iron bar blown through his brain, and how the damage to one region of his brain--the ventromedial prefrontal cortex (vmPFC)--turned him into something like a psychopath with no moral sense, showing the same changes in moral personality that Damasio had seen in his patients with damage to the frontal lobes of the brain. These patients had suffered no decline in their intellectual capacities for speaking and abstract reasoning, but they lacked the capacity for practical reasoning about how to act in socially appropriate and personally advantageous ways, because they lacked the moral emotions to motivate them in planning their lives.  What this showed was that there was no sharp separation between reason and emotion, because emotion was part of good practical reasoning (Damasio et al. 1994; Damasio 1994). 

Damasio's work had a crucial influence on my writing of Darwinian Natural Right in 1998, in which I argued that the biological ethics of human nature required a complex interaction of reason and emotion, which confirmed the rational emotivism of Aristotle, David Hume, and Adam Smith, and refuted the rationalist ethics of Immanuel Kant.  (I have written about Damasio's Spinozist neuroscience here and here.)

But then sometime around 2002, I read Malcolm Macmillan's An Odd Kind of Fame: Stories of Phineas Gage, and I was persuaded by him that Damasio's account of Gage was distorted by Damasio's failure to see that Gage's mental and moral decline was only temporary.  In recent weeks, I have been thinking more about this, and now I see that Damasio was at least partially correct about what Gage teaches us, although Damasio was mistaken in assuming that the damage to Gage was permanent and irreversible.  The plasticity of the brain allows for some limited recovery even from brain injuries as severe as that suffered by Gage.  Moreover, recent research in the neuroscience of morality suggests that while the brain does support moral judgment, there is no specifically moral center of the brain, but there is a complex neural network of brain regions that sustains moral experience.

On September 13, 1848, Gage was 25 years old and the foreman of a work crew for the Rutland and Burlington Railroad south of Cavendish, Vermont.  Their job was to prepare a flat roadbed for laying track by blasting through the rocky hills.  To do that, they had to bore deep holes in the rock, put blasting powder and a fuse in each hole, and then pack sand and clay on top so that the blast's energy would be directed into the rock.  Gage used a tamping rod to pack down the sand and clay.  He had had a blacksmith make a special rod for him that was three feet seven inches long, 1 1/4 inches in diameter, and weighed over 13 pounds.  The end of the rod entering the hole was tapered down to a point 1/4 inch in diameter.

He was working on a hole filled with powder and a fuse.  Distracted by something he heard from his men, he turned his head over his right shoulder to speak to them.  At that point, he dropped his rod into the hole, and the rod rubbing against the rock created a spark that ignited the powder.  The explosion launched the rod like a missile: it entered the left side of Gage's face, passed behind his left eye into the left side of his brain, exited the top of the skull through the frontal bone, and landed some 80 feet away, smeared with blood and brain.

Gage was thrown onto his back.  But, amazingly, within a few minutes, Gage was speaking and walking around.  He was taken into town, where two doctors--Edward Williams and John Harlow--treated him.  Harlow took charge of the case and cared for him over the next six months.  Harlow kept notes on the case, and most of what we know about Gage comes from Harlow's published reports of 1848 and 1868.  A few other doctors--particularly Henry Bigelow, professor of surgery at Harvard University--also saw Gage and wrote about him.

Harlow concluded that the damage to Gage's brain had been primarily to the anterior and middle lobes of the left cerebral cortex, so that whatever function that part of the brain served must have been destroyed.  Apparently, this lost function had something to do with moral personality, because that was the change that Gage showed.  In his 1868 report, Harlow wrote:

". . . His contractors, who regarded him as the most efficient and capable foreman in their employ previous to his injury, considered the change in his mind so marked that they could not give him his place again.  The equilibrium or balance, so to speak, between his intellectual faculties and animal propensities seems to have been destroyed.  He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned in turn for others appearing more feasible.  A child in his intellectual capacity and manifestations, he has the animal passions of a strong man.  Previous to his injury, though untrained in the schools, he possessed a well-balanced mind, and was looked upon by those who knew him as a shrewd, smart business man, very energetic and persistent in executing all his plans of operation.  In this regard, his mind was radically changed, so decidedly that his friends and acquaintances said he was 'no longer Gage'" (reprinted in Macmillan 2000, 414-15).

These brief comments by Harlow were responsible for making Gage's story the most famous case of brain injury--often mentioned in textbooks of psychology and neurology, along with pictures of Gage's skull--because this seemed to show that human personality--the human soul or spirit--is a biological product of particular areas of the human brain that can be lost when the brain is damaged: Gage was no longer Gage.

From Gage's mother, Harlow learned about Gage's subsequent life.  He travelled around New England exhibiting himself, along with his tamping iron, to audiences who paid to see him.  In New York City, he was an exhibit at P. T. Barnum's Museum.  Then, in 1851, he worked in a livery stable in Hanover, New Hampshire, for a year and a half.  In August, 1852, he was hired by a businessman who was setting up a line of coaches in Chile at Valparaiso; and Gage worked in caring for the horses and driving a coach between Valparaiso and Santiago for seven years.

Then, in 1859, Gage became ill, and he decided to leave Chile and travel to San Francisco to live with his mother who had moved there.  Gage worked briefly for a farmer in Santa Clara.  But he began to have severe epileptic convulsions that killed him in 1860.

For some years, Harlow had lost track of Gage until he began to write to Gage's mother in 1866.  She reported how he had died.  At Harlow's request, she and her family agreed to exhume Gage's body and detach his skull so that they could deliver it to Harlow for scientific study.  Harlow arranged to have the skull and Gage's tamping iron given to the Harvard Medical School, where they became the most famous items in the school's museum.

In 1994, Damasio and his colleagues took measurements from Gage's skull at Harvard and used modern neuroimaging techniques to reconstruct the probable path of the rod through Gage's brain and thus determine the exact location of the damage.  They inferred that the ventral and medial sectors of both the left and right prefrontal cortices were damaged.  So the damage was not limited to the left side, as Harlow had said.

Damasio had seen the same damage in some of his patients at the University of Iowa Hospitals, who displayed the same personality changes shown by Gage.  Like Gage, they showed no decline in general intelligence, memory, or learning; but they had lost their capacity for planning and executing good moral decisions.  This suggested that the ventromedial prefrontal cortex was the one region of the brain crucial for moral judgment.  Damage to this part of the brain seemed to cause what Damasio called "acquired sociopathy," because those with such brain lesions behaved like sociopaths or psychopaths, who lack any conscience or moral sense.  They understood intellectually the difference between good and bad conduct, but they lacked the emotional motivation to choose the good.  They were condemned, Damasio observed, "to know but not to feel" what they ought to do.

But was this really true for Gage?  Damasio claimed that before the accident Gage had been "a responsible, intelligent, and socially well-adapted individual, a favorite with peers and elders."  Then, after the accident, he suffered a "profound change in personality"--"Gage was no longer Gage": "he had become irreverent and capricious.  His respect for the social conventions by which he once abided had vanished.  His abundant profanity offended those around him.  Perhaps most troubling, he had taken leave of his sense of responsibility."  As a consequence, "Gage never returned to a fully independent existence, never again held a job comparable to the one he once had" (Damasio et al. 1994, 1102).

Against Damasio, however, Macmillan argues that the moral degeneration in Gage after the accident was only temporary, and that he did show at least a partial recovery.  The evidence for this is that he did eventually support himself with some steady jobs.  He worked at the livery stable for a year and a half, and he worked as a coach driver in Chile for seven years.  Being a successful coach driver on the route between Valparaiso and Santiago would have required practical and social skills in satisfying the needs of his passengers.  A few years ago, Macmillan found evidence that in 1860 a Dr. Henry Trevitt, who had lived in Valparaiso, reported "that he knew Gage well; that he lived in Chile, where he was in stage driving; and that he was in the enjoyment of good health, with no impairment whatever of his mental faculties" (Macmillan and Lena 2010, 648).

Neuroscientists today often identify the frontal lobes as serving the "executive functions" of the brain in providing a central supervisory system that organizes practical decision-making, so that damage to the frontal lobes produces a "dysexecutive syndrome" in which people cannot organize their practical lives in a rational manner--their behavior becomes erratic and impulsive.  As Macmillan observes, rehabilitation programs for people who have suffered severe frontal lobe damage often try to restore some of this executive management by providing a tight routine of external structure that teaches the patients to organize their thoughts and actions through a daily repetition of a step-by-step process to achieve specific goals.

Macmillan thinks Gage's job as a stagecoach driver in Chile provided him an informal version of this rehabilitation program.  Macmillan has found some newspaper reports of the daily coach run between Santiago and Valparaiso--leaving Valparaiso at 4 a.m. for a 100-mile, 12-13 hour journey to Santiago.  We can imagine that Gage would have had to rise well before 4 a.m. to feed, groom, and harness the horses.  He would then have had to load the passengers' luggage, collect their fares, and then provide for their needs throughout the day, while skillfully driving the horses over rugged and crowded roads.  Thus, his daily work was organized by a strict external structure.  And he must have been good at this if he was employed in this for seven years, perhaps by the same employer (Macmillan and Lena 2010, 645).

Macmillan recognizes this as the kind of rehabilitation that was developed by Aleksandr Romanovich Luria, a famous Soviet neurologist, for the rehabilitation of Red Army soldiers with frontal lobe brain injuries from World War II.  Luria believed that the frontal lobes of the brain allow us to use an internal language to plan and regulate our actions to achieve our goals: we talk ourselves through our day.  Patients with damage to the frontal lobes must learn how to do this.  Luria would have supervisors talk to their patients, telling them what to do step-by-step to achieve some simple goal.  Then the patients would be told to repeat these words to themselves as they moved through each step of the task.  This would be done over and over every day in exactly the same way, until finally the patients would develop a simple internal language of supervising their own behavior so that their lives would become highly structured.  Luria admitted, however, that complete success--particularly with massive frontal damage in both lobes--was almost never achieved.  Very few of his patients learned to live independently (Luria 1980, 246-365).

Macmillan sees a similar kind of rehabilitation program in some reports of how people with severe frontal lobe damage can learn to control their conduct when they are habituated by structured environments of behavioral conditioning.  For example, Thomsen, Waldemar, and Thomsen (1990) have related the 20-year case history of a young woman who at age 17 was involved in a car accident that killed both of her parents and left her with bilateral frontal lobe damage.  She regressed to a state of extreme childishness with grasping, sucking, and yawning movements.  She showed almost no emotion, and she could not establish any emotional contact with anyone.  She could not care for herself, and so she had to live in nursing homes for over 10 years.  Before the injury, she had completed 9 years of schooling, and she had had good relationships with her schoolmates and her teachers.  She had normal intelligence.  But those who knew her thought she was rather immature.

Thirteen years after her injury, when she was 30 years old, she began living with a 45-year-old man who cared for her without having any sexual relationship with her.  He patiently wrote out a program for how she should do the housework, which he read to her every day in exactly the same words.  He praised her when she did something well.  After a full year of this, she showed no improvement.  But by the second year, he had some success, in that she did the housework and the shopping without his assistance.  She was no longer restless.  She spoke kindly about her partner and his family.  But she remained childish in her mind and character.

Just like Luria's patients, people like this woman with massive frontal lobe damage can show some improvement in managing their life when they are guided by a strict program of behavioral conditioning, but the success is very limited, and they never recover the normal moral judgment that they had before their brain injury.

To me, that confirms Damasio's conclusion about people like Gage--that without the normal functioning of the ventromedial prefrontal lobes, human beings lose their capacity for good moral character.  Even Macmillan seems to concede that when he writes:

"Phineas Gage made a surprisingly good psycho-social adaptation: he worked and supported himself throughout his post-accident life; his work as a stagecoach driver was in a highly structured environment in which clear sequences of tasks were required of him; within that environment contingencies requiring foresight and planning arose daily; and medical evidence points to his being mentally unimpaired not later than the last years of his life.  Although that Phineas may not have been the Gage he once had been, he seems to have come much closer to being so than is commonly believe" (Macmillan and Lena 2010, 655).

"Phineas may not have been the Gage he once had been."  So Gage was no longer Gage?

I assume, however, that Macmillan would want to insist that Damasio is still wrong in claiming--like the phrenologists--that moral judgment resides in one specific part of the brain.  Now Damasio does indicate his partial agreement with the phrenologists of the 19th century.  He agrees with Franz Joseph Gall's claim that the brain is the organ of the spirit.  He also agrees with the phrenologists in that "brain specialization is now a well-confirmed fact."  But he disagrees with the claim that each function of the brain depends on a single "center" that is independent of the other parts of the brain.  Instead of that, he sees that each mental function--such as vision, language, or morality--arises from systems of interconnected brain regions.  So while the ventromedial prefrontal cortices are important, perhaps even necessary, for moral judgment, the execution of this function depends on a collection of systems in which many parts of the brain must be properly connected (Damasio 1994, 14-17, 70-73).  So Sandra Blakeslee (1994) was mistaken in her article on Damasio's research when she said that he had identified the "brain's moral center."

I will develop this point--that morality depends on the complex interaction of many different parts of the brain--in my next post.


REFERENCES

Blakeslee, Sandra. 1994. "Old Accident Points to Brain's Moral Center." New York Times, May 24.

Damasio, Antonio. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. New York: G. P. Putnam's Sons.

Damasio, Hanna, Thomas Grabowski, Randall Frank, Albert Galaburda, and Antonio Damasio. 1994. "The Return of Phineas Gage: Clues About the Brain from the Skull of a Famous Patient." Science 264: 1102-1105.

Luria, Aleksandr Romanovich. 1980. Higher Cortical Functions in Man. Second Edition. Trans. Basil Haigh.  New York: Basic Books.

Macmillan, Malcolm. 2000. An Odd Kind of Fame: Stories of Phineas Gage. Cambridge: MIT Press.

Macmillan, Malcolm, and Matthew Lena. 2010. "Rehabilitating Phineas Gage." Neuropsychological Rehabilitation 20: 641-658.

Thomsen, Inger Vibeke, Gunhild Waldemar, and Anne Marie Thomsen. 1990. "Late Psychosocial Improvement in a Case of Severe Head Injury with Bilateral Fronto-Orbital Lesions."  Neuropsychology 4: 1-11.

Wednesday, March 17, 2021

What Do People Really Do in a Realistic Trolley Dilemma Experiment?

You are walking along a trolley track in San Francisco.  You see a runaway trolley that will kill five people who have somehow become bound to the tracks.  You also see that there is a switch that will turn the trolley onto a side track, a spur, and thus save the lives of the five people.  Unfortunately, however, there is one person bound to the side track, and so if you throw the switch, he will be killed.  Should you throw the switch?

On another day, you are walking on a footbridge over the tracks.  You see another runaway trolley speeding toward five people bound to the track.  This time, there is no possibility of switching the trolley to a side track.  You could jump onto the track to try to stop it, but you are such a small person that you probably could not stop the trolley.  You notice that there's a big fat man on the bridge who is big enough to stop the trolley if you push him onto the track.  Should you push the fat man?

Oh, I know, this runaway trolley scenario sounds too cartoonish to be taken seriously.  But it does capture the moral dilemma that people can sometimes face--perhaps in war--when it seems that some people must die to save the lives of many more.  Although killing someone is usually wrong, there are circumstances in which killing is justifiable--such as killing in self-defense or in defense of the lives of others.

Of the hundreds of thousands of people around the world who have participated in formal Trolley Dilemma surveys, most (80% to 90% in some studies) say they would divert the trolley in the Switch Case, but most (around 75%) say they would not push the fat man in the Footbridge Case.  As reported by Paul Bloom (in Just Babies, 167-68), even three-year-old children presented with the trolley problem (using Lego people) will tend to say that throwing the switch is right, but pushing the man off the bridge is wrong.  What is most striking is that most people react differently to the two cases even though pulling the switch and pushing the fat man have identical consequences--one person dies to save five.  Why?

Joshua Greene thinks that if you scan the brains of people with fMRI while they are deciding this Trolley Dilemma, you will see the neural activity that explains why people decide this the way they do; and this will reveal the neural correlates of moral judgment.  Previously, I have written about the Trolley Dilemma (here and here) and about Greene (here, here, and here).  Recently, I have been thinking more about this as possibly showing how conscience and guilt arise in the brain.

But right now I am wondering whether what people say they would do in the hypothetical Trolley Dilemma situation shows us what they would actually do in a real Trolley Dilemma situation.

As far as I know, the first realistic Trolley Dilemma experiment was done by Michael Stevens for his "Mind Field" YouTube series (here).  The video, entitled "The Greater Good," is about 35 minutes long; it was Episode 1 of the second season, which first appeared online on December 6, 2017.

The experiment is clever.  Seven people were recruited to participate in a focus group for the "California Rail Authority" (a fictitious organization).  They think they will be asked questions about what they would like to see in high-speed rail transportation.  When they arrive for their meeting at a CRA trailer on a hot day, each individual is told that there will be a 15-minute delay, and that while they are waiting, they can sit in an air-conditioned remote railway switching station.

Inside the switching station, they meet a switchman who sits before a panel of screens showing (apparently) live pictures of a remote railway switching location.  The switchman explains how he switches a train from track 1 to track 2.  A train is coming through, and he has the duped subject actually switch the train onto track 2.  Then the switchman receives a phone call and says he has to leave for a short time, but the subject should stay in the switching station.  What the subject does not know is that everything on the video screens is prerecorded, that the switchman is an actor, and that everything the subject does is being filmed by hidden cameras.

Sitting alone in the switching station, the subject sees railway workers walking onto the tracks and hears a loud warning: "objects on the tracks."  One worker walks onto track 2, and he appears to be distracted by a telephone call.  Five workers walk onto track 1, and they are wearing sound-proofing earphones.  All the workers have their backs turned to the approaching train, and the subject sees the train coming along with the warning "a train is approaching" on track 1.  The workers do not seem to see or hear the approaching train.  So the subject must think that the 5 men on track 1 will be killed unless the train is switched onto track 2, in which case the 1 man will be killed.  Should the subject pull the switch?

You might think this is an unethical experiment.  Isn't it wrong to trick people into going through such a traumatic experience?  After consulting with psychologists, Stevens decided this would be an ethical experiment as long as they screened the people they recruited and excluded those who showed the personality traits that might predispose them to be excessively traumatized by the experiment.  He also designed the experiment to minimize the traumatic effects.

Seven individuals went through the experiment.

The first was Elsa.  When she was left alone in the switching station and saw the train approaching the men on the tracks, she became visibly disturbed as she intently stared at the screens.  Just a few seconds before the train would have hit the 5 men on track 1, she switched the train onto track 2, expecting that the one man would then be killed.  But as soon as she pulled the switch, the screens went blank, and this message was flashed on the screen:  "End of test.  Everyone is safe."  She did not actually see the man on track 2 being hit.  And immediately Stevens and a psychologist entered the building and told her that this was all an experiment, and all these people were actors.

They asked Elsa about what she was feeling and thinking.  She was frightened by what she saw on the screens.  "Their lives are in my hands," she thought.  "I must save more lives."  "I didn't know if I made the right decision. . . . But a life is a life."

When people are asked to make a decision about flipping the switch in the hypothetical Trolley Dilemma, most of them agree with Elsa and decide to kill one person to save five.  Surprisingly, however, in this experiment, in which people thought they were really deciding life or death, most of the participants--5 out of 7--refused to flip the switch.  The next five participants after Elsa all froze in fear as they watched the train approaching the men, and so they refused to flip the switch.

Afterwards, each of the 5 said that they felt terrified, and they froze up so that they could not move their hands to the switch.  "I thought about it, but I couldn't do it," one said.  They all suggested that they had fleeting thoughts that surely the men would somehow escape from being killed.  Surely the workers would notice the train and jump away.  Or maybe the train had sensors that would stop it to avoid hitting them.  Or maybe there are other railway people watching over this scene who can stop the train.  They were looking for some way to escape the tragic inevitability of the choice between one dying and five dying.  One person said: "I didn't know who should live, who should die. . . . Switching or not, someone would be hurt."

The last of the 7 in the experiment was Cory, and like Elsa, he flipped the switch.  But we see and hear him talking to himself: "Oh no. . . . They should see this. . . . Oh my God!"

We see the terror in Cory's face, and afterwards he expresses how horrified he felt in taking responsibility for his decision.  "5 people versus 1," he says.

Cory burst into tears once Stevens and the psychologist came into the building.  He was clearly distraught by what he had done.

I know we probably shouldn't draw conclusions from an experiment involving only 7 individuals, but I will go ahead anyway and draw three conclusions.  First, this experiment suggests that for most people, what we think we would do in a moral dilemma like this differs from what we would actually do.  Most of us decide the hypothetical Trolley Dilemma by saying that we would flip the switch, and thus endorse the utilitarian calculation that taking one life to save five is the "greater good."  But if we were in a real tragic situation like this--or one that we thought was real--most of us would freeze, refusing to kill the one man to save five.

If this is so, how do we explain it?  Was there something about Stevens' experiment that made the flipping of the switch feel like a "personal" harming of the one man on track 2--making it feel more like the pushing of a fat man off the footbridge into the path of the train?  If Stevens could have scanned the brains of these 7 people during the experiment, would he have seen that the more emotional parts of the brain (amygdala, posterior cingulate cortex, and medial prefrontal cortex) were active, countering the more calculating parts of the brain (the dorsolateral prefrontal cortex and the inferior parietal lobe), which is what Joshua Greene and his colleagues in 2001 found in the brains of those deciding the Footbridge Dilemma?

Was there something about the circumstances of Stevens' experiment that made at least five of the seven people feel that flipping the switch would violate the instinctive moral principle of double effect because they would be directly targeting the man on track 2 for death?

My second conclusion is that in a tragic dilemma like this, there is no right choice.  That's what makes it tragic!  So both those who pull the switch and those who don't can feel justified.

Here I seem to disagree with Stevens.  In his closing comments, he asks the question, Is it wrong to freeze?  And he implies that the answer is yes, because he says we can learn to overcome our propensity to freeze to serve the "greater good."  Both he and the psychologist appear to praise Elsa and Cory for having made the right choice.  But I don't see that.  And in the meeting of all the participants at the end, they share their experiences in a way that suggests that those who froze need not feel ashamed of their decision not to throw the switch.

My final conclusion is that this experiment shows both reason and emotion at work in moral judgment.  These 7 individuals had only a couple of minutes to decide the shocking moral dilemma they faced.  But in that short time, they all showed deep emotional disturbance, and they all engaged in some reasoning.  They all assessed the situation emotionally and rationally before making their decision.  This teaches us a general truth about human nature--that moral judgment is both rational and emotional.  But it also teaches us about the individuality of moral judgment: where there is no clearly right answer, different people will come to different decisions.

Sunday, March 14, 2021

The Neural Correlates of First-Party Punishment: Conscience and Guilt

 "Evolution built us to punish cheaters."  That's how Judge Morris Hoffman began his brilliant book The Punisher's Brain: The Evolution of Judge and Jury (2014).  In that book, Hoffman laid out the evidence and reasoning for the claim that social life and human civilization generally depend on our evolved instinct to punish ourselves and others when we or other people violate those social norms of cooperation that sustain any social order.  

I have written about Hoffman's book in a previous post.  I have suggested that what Hoffman says about the natural instinct for punishing those who disobey social norms corresponds to what John Locke identifies as the natural right of all people to punish those who transgress the law of nature, which is that "no one ought to harm another in his Life, Health, Liberty, or Possessions" (Second Treatise, 6).  This Lockean law of nature corresponds to what Hoffman calls the three rules of right and wrong rooted in our evolved human nature to secure property and promises.  Rule 1: Transfers of property must be voluntary. Rule 2: Promises must be kept.  Rule 3: Serious violations of Rules 1 and 2 must be punished.  Like Locke, Hoffman interprets "property" in a broad sense as starting with self-ownership and encompassing one's life, health, and possessions, as well as the life, health, and possessions of one's family and others to whom one is attached.  Understood in this broad way, Rule 1 embraces criminal law and tort law, while Rule 2 embraces contract law.

Hoffman and Locke also agree in identifying three levels of natural punishment.  Through self-punishment or first-party punishment, we punish ourselves through conscience and guilt.  Through second-party punishment, we punish those who harm us by immediately retaliating against them or by later taking revenge against them.  Through third-party punishment, we punish those who have harmed other people.

If Hoffman is right in claiming that biological evolution has built our brains to express this kind of punishment, then we should expect to see neural correlates for punishment at all three levels.  Hoffman surveys the neuroscientific evidence for this.

In this post, I am beginning a series of posts reviewing this evidence, some of which was mentioned by Hoffman, but also some that has emerged over the past eight years.

Let's start with self-punishment.  We punish ourselves by blaming ourselves for our misconduct, which is expressed through feelings of conscience and guilt.  In feeling guilt, we blame ourselves for our past misconduct: we recognize that we have wrongly harmed others, and that they can rightly punish us.  In feeling conscience, we imagine blaming ourselves for some future misconduct, and this gnawing of conscience can motivate us to refrain from that misconduct.

From Charles Darwin to Edward Westermarck to Jonathan Haidt, evolutionary psychologists have explained these feelings of conscience and guilt as instinctive evolutionary adaptations for human beings as social animals who need to enforce the social norms of cooperation by punishing themselves for cheating.  If this is true, then we should see evidence for these evolutionary adaptations in the human brain.

To search for such evidence, we need to somehow see the mind thinking in the brain.  That became possible for the first time in the late 1970s with the invention of the positron camera and positron emission tomography (the PET scan).  Like magnetic resonance imaging (MRI), the PET scan depends on a fundamental postulate--"neurovascular coupling"--first proposed by the physiologists Charles Roy and Charles Sherrington in 1890: the most active parts of the brain will show an increase in blood flow in the vessels supplying them, because greater neural firing requires greater energy provided by the oxygen and glucose in the increased blood flow.  This postulate was confirmed in the 1950s by the neurosurgeon Wilder Penfield: while operating on people with severe epilepsy, he would wake them up during the surgery, ask them to move their fingers, and he could see changes in color from an influx of blood to regions of the brain active in motor control.  In the 1970s, the neuroscientists David Ingvar and Niels Lassen developed a brain imaging method in which a radioactive gas was injected into the carotid artery so that a scintillation camera at the side of the subject's head could record the circulation of blood.  This was the first functional imaging of the brain at work (Le Bihan 2015).

The PET scan also depends on radioactivity.  Water that has been made radioactive in a cyclotron is injected into a vein of the arm.  When the oxygen nucleus of the water has been made radioactive (oxygen-15), it emits positrons (positively charged electrons) as it decays over a few minutes.  When the radioactive water reaches the brain, the positron camera can record the higher concentration of positron emissions in those regions of the brain with increased blood flow.
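To give a rough sense of the time scale (the half-life figure below is the standard value for oxygen-15, roughly two minutes, and is my addition rather than something reported in the studies discussed here), here is a minimal Python sketch of how quickly such a tracer decays after injection:

# A minimal sketch, assuming oxygen-15 water with a half-life of about 122 seconds.
# It shows why the PET signal is usable for only a few minutes after injection.
HALF_LIFE_SECONDS = 122.0  # approximate half-life of oxygen-15 (my assumption, not from the studies)

def fraction_remaining(seconds_elapsed: float) -> float:
    """Fraction of the injected tracer still undecayed after a given time."""
    return 0.5 ** (seconds_elapsed / HALF_LIFE_SECONDS)

for minutes in (1, 2, 5, 10):
    print(f"After {minutes:2d} min: {fraction_remaining(minutes * 60):.1%} of the tracer remains")

After about two minutes only half of the tracer remains, and after ten minutes only a few percent--which is why the scanning must be done quickly.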

In 2000, experimenters used PET scanning for the first neuroimaging study of guilt (Shin et al. 2000).  They tested eight male participants for their experience of guilt.  The participants were asked to write descriptions of two kinds of past personal events--one emotionally charged event that had made them feel the most guilt they had ever experienced and two other events that created no deep emotion.  These descriptions were then modified so that they were written in the second person and in the present tense.  These scripts were read and tape-recorded in a neutral male voice for playback in the PET scanner.  The participants were asked to listen carefully to the scripts and imagine the events as vividly as possible.  After coming out of the scanner, they were asked to rate the intensity of their emotional states during the readings of the guilt and neutral scripts on a scale from 0 to 10.

Their average subjective rating for guilt was 8.8 for the guilt script and 0 for the neutral script.  For shame, the average was 7.4 for the guilt script and 0 for the neutral script.  For disgust, the average was 6.5 for the guilt script and 0 for the neutral script.

As compared with the neutral script, the PET scans showed increased blood flow to three areas of the paralimbic regions of the brain: the anterior (front) temporal poles, the anterior cingulate gyrus, and the anterior insular cortex/inferior frontal gyrus.

The paralimbic cortex surrounds the middle and lower parts of the brain's two hemispheres.  It is a network of brain structures associated with emotional processing, goal setting, motivation, and self-control.  This PET scanning study suggests that some of the neural circuitry in this paralimbic network supports the human experience of guilt by which we punish ourselves for violating social norms.  And once we have learned how guilty we feel from our past misconduct, we will feel the pangs of conscience when we contemplate some similar misconduct in the future.
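For readers who want a concrete picture of the logic of such a comparison, here is a toy Python sketch of the "subtraction" idea--comparing mean regional measurements during the guilt script with those during the neutral script.  The numbers below are invented for illustration; they are not the study's data, and the actual analysis used voxel-by-voxel statistical mapping rather than a simple difference of means:

# A toy sketch of the subtraction logic: for each brain region, compare the mean
# blood-flow measurement during the guilt script with the mean during the neutral
# script.  All values are hypothetical (arbitrary units, one value per participant).
from statistics import mean

guilt_scans = {
    "anterior temporal poles": [54, 57, 55, 56, 58, 53, 57, 55],
    "anterior cingulate gyrus": [61, 63, 60, 62, 64, 61, 62, 63],
}
neutral_scans = {
    "anterior temporal poles": [50, 51, 49, 52, 50, 48, 51, 50],
    "anterior cingulate gyrus": [58, 59, 57, 60, 58, 57, 59, 58],
}

for region in guilt_scans:
    difference = mean(guilt_scans[region]) - mean(neutral_scans[region])
    print(f"{region}: guilt minus neutral = {difference:+.1f}")

A region that shows a consistently higher signal during the guilt scripts than during the neutral scripts is then treated as part of the circuitry engaged by the experience of guilt.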

We should recognize, however, that neuroimaging studies like this have some serious limitations.  First, the sample size is small (only eight individuals).  Second, it relies on self-reporting, so that we must trust that these eight people honestly and accurately reported their experience of guilt.  Third, the spatial resolution of PET is limited in ways that can create errors in neuroanatomical localization.

Neuroimaging with MRI produces clearer, more precise images.  While PET relies on the oxygen nucleus of the water molecule and radioactivity, MRI relies on the hydrogen nucleus of water and magnetism.  MRI uses a large magnet to create an intense magnetic field that is strong enough to magnetize the proton in the nucleus of the hydrogen atom in the water molecule.  The MRI scanner uses radio waves to excite the protons so that they emit radio wave signals that can be detected by the scanner.

The hemoglobin of red blood cells binds oxygen in the lungs and transports it through the arteries to the organs of the body.  Hemoglobin contains iron, which gives it magnetic properties.  As long as the hemoglobin is bound to oxygen, it is "diamagnetic"--weakly repelled by a magnetic field.  When it has released its oxygen, it becomes "paramagnetic"--attracted by a magnetic field, so that it behaves like a little magnet.

In the blood vessels of the brain, some of the oxygen is released, and the red blood cells are enriched with deoxygenated hemoglobin and so magnetized.  These magnetized red blood cells change the local magnetic field and disturb the magnetization of water molecules around them, which lowers the MRI signal sent by the hydrogen protons.  A computer analysis of these changing signals can then generate images showing the most active parts of the brain as indicated by variation of oxygenation of the blood.

In 1992, researchers in four groups in the United States showed how this could be done: subjects in an MRI scanner were presented with visual images, and the scanner generated images of neural activity in the primary visual cortex at the back of their brains (Ogawa et al. 1992).  This was the beginning of functional brain imaging with MRI (fMRI), using the method called BOLD (blood-oxygen-level-dependent) imaging.



An fMRI Image of Human Neural Activity in the Primary Visual Cortex at the Back of the Brain (the Occipital Lobe)
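To illustrate the basic idea behind BOLD analysis--that neural activity is inferred from a sluggish, delayed blood-oxygenation response to a stimulus--here is a minimal Python sketch.  It is not the 1992 researchers' method; the scan interval, the timing of the stimulus block, and the gamma-shaped hemodynamic response are my own illustrative assumptions:

# A minimal sketch of the BOLD idea: model a block of visual stimulation as a
# boxcar and convolve it with a sluggish hemodynamic response that peaks a few
# seconds after neural activity.  All timing parameters are assumptions.
import numpy as np

TR = 1.0                      # seconds between scans (assumed)
time = np.arange(0, 60, TR)   # one minute of scanning

# boxcar: stimulus "on" from 10 to 30 seconds, "off" otherwise
boxcar = ((time >= 10) & (time < 30)).astype(float)

# a simple gamma-like hemodynamic response peaking around 5 seconds
t_hrf = np.arange(0, 20, TR)
hrf = (t_hrf ** 5) * np.exp(-t_hrf)
hrf /= hrf.sum()

# the predicted BOLD time course lags and smooths the stimulus
predicted_bold = np.convolve(boxcar, hrf)[: len(time)]

for t, stim, bold in zip(time[::5], boxcar[::5], predicted_bold[::5]):
    print(f"t = {t:4.0f} s   stimulus = {stim:.0f}   predicted BOLD = {bold:.2f}")

The predicted signal rises a few seconds after the stimulus turns on and falls a few seconds after it turns off, and it is this lagged pattern that the scanner looks for in each part of the brain.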

One systematic review of 16 fMRI studies of guilt found three kinds of methods for measuring guilt (Gifuni, Kendal, and Jollant 2017).  One method was to have subjects read a script involving guilt and then evaluate the emotion evoked by the script.  A second was to ask subjects to relive a guilt-causing event from their past or to imagine themselves in an imaginary guilt-causing event.  A third method was to put subjects in a social situation that might elicit guilt, such as playing economic behavioral games or other kinds of interpersonal games.  The MRI scanner would then identify the areas of their brains that were most active during their experience of guilt.

This review identified a distributed network of brain regions involved in processing guilt.  There were 12 clusters of brain activation located in the prefrontal, temporal, and parietal regions, mostly in the left hemisphere.  "Together, these interconnected regions have been associated with a wide variety of functions pertaining to guilt, including self-awareness, theory of mind, conceptual knowledge, moral values, conflict monitoring and feelings of moral disgust" (Gifuni, Kendal, and Jollant 2017, 1174).

In general, brain scanning studies have shown that moral experience elicits greater activity in brain regions for emotional processing, social cognition (including reading the minds of others), and abstract reasoning about the past and future.  These regions include the ventromedial and dorsolateral prefrontal cortex, the amygdala, superior temporal sulcus, bilateral temporoparietal junction, posterior cingulate cortex, and precuneus.  In other words, "many brain areas make important contributions to moral judgments although none is devoted specifically to it" (Greene and Haidt 2002, 517).  

Where in the brain is morality?  The answer seems to be: Everywhere and nowhere (Young and Dungan 2012).  There is no specifically moral organ or moral brain set apart from the rest of the brain: in a sense, the moral brain is the whole brain, because human morality depends on "the brain's general-purpose machinery for representing value, applying cognitive control, mentalizing, reasoning, imagining, and reading social cues" (Greene and Young 2020, 1009).

This indicates that the Kantian philosophers are wrong in assuming that morality is an autonomous human activity of pure practical reason belonging to a realm of freedom that transcends the realm of nature, including the human nature of the human body and brain.

I will say more about this in future posts. 


REFERENCES

Gifuni, Anthony J., Adam Kendal, and Fabrice Jollant. 2017. "Neural Mapping of Guilt: A Quantitative Meta-Analysis of Functional Imaging Studies." Brain Imaging and Behavior 11: 1164-1178.

Greene, Joshua, and Jonathan Haidt. 2002. "How (and Where) Does Moral Judgment Work?" Trends in Cognitive Sciences 6: 517-523.

Greene, Joshua, and Liane Young. 2020. "The Cognitive Neuroscience of Moral Judgment and Decision-Making." In David Poeppel, George R. Mangun, and Michael S. Gazzaniga, eds., The Cognitive Neurosciences, 1003-1013. 6th edition. Cambridge: MIT Press.

Hoffman, Morris. 2014. The Punisher's Brain: The Evolution of Judge and Jury. Cambridge: Cambridge University Press.

Le Bihan, Denis. 2015. Looking Inside the Brain: The Power of Neuroimaging. Princeton, NJ: Princeton University Press.

Young, Liane, and James Dungan. 2012. "Where in the Brain is Morality? Everywhere and Maybe Nowhere." Social Neuroscience 7: 1-10.

Saturday, February 27, 2021

The Debate over Transgender Equality: Does Darwinian Natural Right Provide Any Standard?

 I have argued that the desire for sexual identity is one of the 20 natural desires of our evolved human nature.  If so, does that provide any natural standard for adjudicating the debate over transgender equality?  Most human beings easily identify their biological sexual identity as male or female.  But in a few cases, men desire to identify themselves as women, or women desire to identify themselves as men.  Does this show that gender identity is a social construction or mental experience that has nothing to do with the biological sex of the body?  Must we respect this gender identity by treating transgender women as women and transgender men as men?

So, for example, must we allow Caster Semenya to compete in the Olympics as a woman, even though she is a chromosomal male (XY) with the high testosterone levels typical for men, which give her an advantage over other women in running events?  If we allow this, chromosomal women (XX) will probably almost never win Olympic running events when competing against transgender or intersex women like Semenya.

Under present rules, Semenya will be prohibited from defending her 800m gold medal (won at the 2016 Summer Olympics in Rio de Janeiro) unless she takes drugs to reduce her testosterone to typically female levels.  Two days ago, her lawyers announced that they will appeal to the European Court of Human Rights, asking it to overturn this rule as a violation of her human rights.

In a previous post, I have written about the controversy over Semenya, arguing that the science of sexuality confirms what we should know from common-sense experience: both the norm of sexual duality and the few exceptions to that norm are biologically real, not just arbitrary social constructions.  We can respect Caster Semenya as an intersex individual with a partially male body and a female gender identity.  But we must also respect those many typically female athletes who want to compete with other biological females, so that they have a chance to display their athletic excellence.  To resolve this conflict of interests, the Olympic committees have properly ruled that intersex individuals like Semenya must either compete with men or reduce their testosterone levels to compete with women.

A similar dispute has arisen in the debate over the Equality Act, just passed by the House of Representatives, which would amend the Civil Rights Act of 1964 to prohibit discrimination on the basis of both sexual orientation and gender identity.  By passing this legislation, the Congress would endorse the interpretation of the Civil Rights Act adopted by the Supreme Court last summer in the Bostock case, where Neil Gorsuch (a Trump appointee) wrote the majority opinion that rejected the Trump Administration's claim that the sexual equality of men and women under the Civil Rights Act does not include the equal treatment of gays and transgender people.  I have written about this here and here.

President Biden has nominated Dr. Rachel Levine to be assistant secretary of health; she would be the first openly transgender federal official confirmed by the Senate.  In her confirmation hearing, Senator Rand Paul questioned her about whether she supported genital mutilation, gender reassignment surgery, and hormone therapy for children.  Levine was evasive, replying only that "Transgender medicine is a very complex and nuanced field with robust research and standards of care."

Apparently, Paul's questioning was motivated by reports that Levine has endorsed a research study concluding that doctors should provide drug therapy for transgender adolescents to suppress puberty, because this would improve their mental health by bringing the sexual traits of their bodies into harmony with their transgender identity.  The study was published in the journal Pediatrics--"Pubertal Suppression for Transgender Youth and Risk of Suicidal Ideation," 145 (February 2020): e20191725.

Transgender people are known to suffer from high rates of suicidality--about 40% report having attempted suicide.  The claim of this new study is that pubertal suppression for transgender children would reduce this propensity to suicide and thus improve their mental health.  This would be done by doctors prescribing gonadotropin-releasing hormone analogues that suppress puberty in adolescents 9 to 16 years old.

The method in this study was to survey 20,619 transgender adults aged 18 to 36, who were asked about their history.  Of this group, 3,494 individuals reported that they had at some time wanted puberty suppression therapy, but only 89 of them reported that they had actually received the treatment.  These individuals were also asked whether they had ever had suicidal thoughts or had attempted suicide.  The researchers were able to show that those who reported receiving the puberty suppression treatment had a lower rate of suicidal thoughts and attempts than those who wanted the treatment but did not receive it.  The researchers concluded: "pubertal suppression for transgender adolescents who want this treatment is associated with favorable mental health outcomes."  And this is the conclusion that has been widely reported in the news media.

But if you read this study carefully, you will see some dubious features of its reasoning.  First of all, there is the fundamental methodological problem with all such survey research: it relies entirely on the self-reporting of the people surveyed, even though it is well known that people are often not accurate or honest in reporting what they have done or believed (see George Beam, The Problem with Survey Research, Transaction Publishers, 2014).

For instance, when some individuals report that they have had pubertal suppression treatment, and that consequently they became less suicidal than they would have been without the treatment, how do we know that this is true?  The researchers collected no medical records or observational data to confirm this self-reporting.  People who reported that they remained suicidal even after having the treatment would, in effect, be confessing that their beliefs and behavior were mistaken.  Is it possible that they might be reluctant to admit that (even to themselves)?

Moreover, if you look at the actual raw numbers, they do not provide strong support for the conclusions of the article.  Only 16.9% of the people surveyed reported that they ever wanted pubertal suppression, so it seems that the great majority of transgender people do not believe this treatment would be good for them.  And those who actually received the treatment were only 0.43% of the total!

Of those who reported wanting to have the treatment, only 89 individuals (2.5%) reported that they had actually had the treatment.  And of those 89, 67 (75.3%) reported that they had thought about suicide, while 37 (41.6 %) reported attempting suicide.  41.6% attempting suicide is close to the 40% of transgender people generally who attempt suicide.  It's hard to see that the treatment has done them much good.
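For anyone who wants to check this arithmetic, here is a short Python sketch using only the raw counts cited above (the variable names are mine, not the study's):

# Recomputing the percentages from the raw counts reported in the study as cited above.
total_surveyed = 20619
wanted_suppression = 3494
received_suppression = 89
recipients_with_suicidal_thoughts = 67
recipients_who_attempted_suicide = 37

print(f"Wanted the treatment: {wanted_suppression / total_surveyed:.1%} of all respondents")
print(f"Received the treatment: {received_suppression / total_surveyed:.2%} of all respondents")
print(f"Received the treatment: {received_suppression / wanted_suppression:.1%} of those who wanted it")
print(f"Suicidal thoughts among recipients: {recipients_with_suicidal_thoughts / received_suppression:.1%}")
print(f"Suicide attempts among recipients: {recipients_who_attempted_suicide / received_suppression:.1%}")

The output reproduces the figures above: 16.9%, 0.43%, 2.5%, 75.3%, and 41.6%.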

Now, it is true that those who reported wanting the treatment but not having it had a higher rate of suicidality than those who wanted it and had it.  But the differences are not very dramatic.  Of the first group, 90.2% reported suicidal thoughts, and 51.2% reported attempts at suicide--as compared with 75.5% and 41.5%, respectively, for the second group.

The most striking finding in these numbers is that regardless of whether transgender people report having puberty suppression or not, transgender people as a whole suffer depressingly high rates of suicidal behavior, much higher than for the general population.

There's another problem here.  Those who had puberty suppression treatment between the ages of 9 and 16 were minors.  That means that their parents agreed to this.  Did the parents force or coax their children into doing this?  Weren't these children too young to make a mature decision of this sort on their own?

Why shouldn't the law prohibit children and parents from making this decision until the children are young adults with enough moral and intellectual maturity to make this decision for themselves?

A popular explanation of transgender people is that they feel "trapped in the wrong body."  These people are said to have a personal gender identity that conflicts with the biological sexual identity of their bodies, and so they might seek hormonal and surgical treatment to change the sexual identity of their bodies to conform to their gender identity.  It has now become a standard medical treatment for "gender dysphoria" to provide procedures for changing one's sex to coincide with one's preferred gender, so that men can become women, and women can become men.

I agree with Melissa Moschella (a philosopher at the Catholic University of America), who has argued against this "wrong body" narrative as based on a false dualist view of human nature that should be rejected in favor of an "animalist" view.  Dualism is false because the body is intrinsic to our personal identity.  Animalism rightly sees that our personal sexual identity as male or female is rooted in the biological identity of our body.  We are our animal bodies.  And therefore men are those with male bodies, and women are those with female bodies.  It's impossible to change one's biological sex, because biological sex is impressed on our body and brain early in fetal and neonatal life.

Gender dysphoria, she has argued, is not a medical problem requiring physiological or surgical treatment but a psychological problem requiring psychiatric counseling.  Those with gender dysphoria are suffering from the mental delusion that their mental sexual identity differs from the sexual identity of their body.  So doctors who try to help them undergo a biological sex change are not curing them but rather supporting their delusional disorder.  This explains why although patients often feel better initially after sex-change procedures, later on they often become depressed, anxious, and suicidal.

Moschella makes a comparison with anorexia.  The cure for anorexia is not liposuction--even if the patient might initially feel better--because the problem is not obesity but the psychological delusion of being too fat.

Moschella has also pointed out the incoherence in the "wrong body" idea.  It is said that a person's gender identity is in the mind not in the body.  But then it is said that the sex of the body must be changed to be in harmony with the true gender identity of the person, which concedes that the sex of the body really is essential for sexual identity.

Moschella seems to be agreeing with Dr. Paul McHugh at Johns Hopkins University Hospital, who was responsible for shutting down the sex-change treatment program that had once been supervised by Dr. John Money.  Money was the one who invented the idea of "gender identity" as different from "sex identity," with the thought that "gender" is a mental or social construction that need not conform to biological nature.  This has supported the claim that while animals have "sex," only humans have "gender."  Moschella's animalism seems to deny the gender/sex dichotomy started by Money.

It should be said, however, that biological nature does sometimes throw up anomalous cases that cannot be clearly identified as male or female--the most obvious case being those who are born as true hermaphrodites, who combine male and female traits.  In response to a question about such cases of intersexuality, Moschella has conceded that these cases show a biological disorder creating confusion about biological sex identity, but gender dysphoria is not an intersexual anomaly of nature like this.

For Moschella to sustain this claim, she would have to refute the "developmental mismatch" theory of gender dysphoria.  According to this theory, in utero the sexual differentiation of the genitals occurs separately and earlier than the sexual differentiation of the brain, so that it is possible for the sexual identity of the body to differ from the sexual identity of the brain.  This is not a dualist theory that denies animalism, because both the body and the brain are biological realities of the human animal.

My argument here is that the central tendency of animal biology is to distinguish the sexual identity of males and females as determined by their complementarity for reproduction.  But while this holds for the most part, there are exceptional cases in nature where the bipolarity of male and female becomes confused.

Some of my other posts on intersexuality and gender dysphoria can be found here, here, and here.