Sunday, March 28, 2021

Phineas Gage and Damasio's Search for Moral Judgment in the Brain

 

The Phineas Gage Story



A Daguerreotype of Phineas Gage, After His Recovery from the Accident


In 1994, I was fascinated by Antonio Damasio's account of how Phineas Gage had an iron bar blown through his brain, and how the damage to one region of his brain--the ventromedial prefrontal cortex (vmPFC)--turned him into something like a psychopath with no moral sense, showing the same changes in moral personality that Damasio had seen in his patients with damage to the frontal lobes of the brain. These patients had suffered no decline in their intellectual capacities for speaking and abstract reasoning, but they lacked the capacity for practical reasoning about how to act in socially appropriate and personally advantageous ways, because they lacked the moral emotions to motivate them in planning their lives.  What this showed was that there was no sharp separation between reason and emotion, because emotion was part of good practical reasoning (Damasio et al. 1994; Damasio 1994). 

Damasio's work had a crucial influence on my writing of Darwinian Natural Right in 1998, in which I argued that the biological ethics of human nature required a complex interaction of reason and emotion, which confirmed the rational emotivism of Aristotle, David Hume, and Adam Smith, and refuted the rationalist ethics of Immanuel Kant.  (I have written about Damasio's Spinozist neuroscience here and here.)

But then sometime around 2002, I read Malcolm Macmillan's An Odd Kind of Fame: Stories of Phineas Gage, and I was persuaded by him that Damasio's account of Gage was distorted by Damasio's failure to see that Gage's mental and moral decline was only temporary.  In recent weeks, I have been thinking more about this, and now I see that Damasio was at least partially correct about what Gage teaches us, although Damasio was mistaken in assuming that the damage to Gage was permanent and irreversible.  The plasticity of the brain allows for some limited recovery even from brain injuries as severe as that suffered by Gage.  Moreover, recent research in the neuroscience of morality suggests that while the brain does support moral judgment, there is no specifically moral center of the brain, but there is a complex neural network of brain regions that sustains moral experience.

On September 13, 1848, Gage was 25 years old, and he was the foreman over a work crew working for the Rutland and Burlington Railroad south of Cavendish, Vermont.  Their job was to prepare a flat roadbed for laying track by blasting through the rocky hills.  To do that, they had to bore deep holes in the rock, put blasting powder and a fuse in the hole, and then add sand and clay in the hole so that the blast's energy would be directed into the rock.  Gage had to use a tamping rod to pack the sand and clay.  He had had a blacksmith make a special rod for him that was three feet seven inches long, 1 and 1/4 inches in diameter, and weighed over 13 pounds.  The end of the rod entering the hole was tapered down to a point 1/4 inch in diameter.

He was working on a hole filled with powder and a fuse.  He was distracted by something he heard from his men, and he turned his head over his right shoulder to speak to them.  At that point, he dropped his rod into the hole, and the rod rubbing against the rock created a spark that ignited the powder.  The explosion launched the rod like a missile: it entered the left side of Gage's face, passed behind his left eye into the left side of his brain, exited the top of the skull through the frontal bone, and landed some 80 feet away, smeared with blood and brain.

Gage was thrown onto his back.  But, amazingly, within a few minutes, Gage was speaking and walking around.  He was taken into town where two doctors--Edward Williams and John Harlow--treated him.  Harlow took charge of the case and cared for him over the next six months.  Harlow kept notes on the case, and most of what we know about Gage comes from Harlow's published reports about Gage in 1848 and 1868.  A few other doctors--particularly Henry Bigelow, professor of surgery at Harvard University--also saw Gage and wrote about him.

Harlow concluded that the damage to Gage's brain had been primarily to the anterior and middle lobes of the left cerebral cortex, so that whatever function that part of the brain served must have been destroyed.  Apparently, this lost function had something to do with moral personality, because that was the change that Gage showed.  In his 1868 report, Harlow wrote:

". . . His contractors, who regarded him as the most efficient and capable foreman in their employ previous to his injury, considered the change in his mind so marked that they could not give him his place again.  The equilibrium or balance, so to speak, between his intellectual faculties and animal propensities seems to have been destroyed.  He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned in turn for others appearing more feasible.  A child in his intellectual capacity and manifestations, he has the animal passions of a strong man.  Previous to his injury, though untrained in the schools, he possessed a well-balanced mind, and was looked upon by those who knew him as a shrewd, smart business man, very energetic and persistent in executing all his plans of operation.  In this regard, his mind was radically changed, so decidedly that his friends and acquaintances said he was 'no longer Gage'" (reprinted in Macmillan 2000, 414-15).

These brief comments by Harlow were responsible for making Gage's story the most famous case of brain injury--often mentioned in textbooks of psychology and neurology, along with pictures of Gage's skull--because this seemed to show that human personality--the human soul or spirit--is a biological product of particular areas of the human brain that can be lost when the brain is damaged: Gage was no longer Gage.

From Gage's mother, Harlow learned about Gage's subsequent life.  He travelled around New England exhibiting himself, along with his tamping iron, to audiences who paid to see him.  In New York City, he was an exhibit at P. T. Barnum's Museum.  Then, in 1851, he worked in a livery stable in Hanover, New Hampshire, for a year and a half.  In August, 1852, he was hired by a businessman who was setting up a line of coaches in Chile at Valparaiso; and Gage worked in caring for the horses and driving a coach between Valparaiso and Santiago for seven years.

Then, in 1859, Gage became ill, and he decided to leave Chile and travel to San Francisco to live with his mother who had moved there.  Gage worked briefly for a farmer in Santa Clara.  But he began to have severe epileptic convulsions that killed him in 1860.

For some years, Harlow had lost track of Gage until he began to write to Gage's mother in 1866.  She reported how he had died.  At Harlow's request, she and her family agreed to exhume Gage's body and detach his skull so that they could deliver it to Harlow for scientific study.  Harlow arranged to have the skull and Gage's tamping iron given to the Harvard Medical School, where they became the most famous items in the school's museum.

In 1994, Damasio and his colleagues took measurements from Gage's skull at Harvard and used modern neuroimaging techniques to reconstruct the probable path of the rod through Gage's brain and thus determine the exact location of the damage.  They inferred that the ventral and medial sectors of both left and right prefrontal cortices were damaged.  So the damage was not limited to the left side as Harlow had said.  

Damasio had seen the same damage in some of his patients at the University of Iowa Hospitals, who displayed the same personality changes shown by Gage.  Like Gage, they showed no decline in their general intelligence, memory, or learning; but they had lost their capacity for planning and executing good moral decisions.  This suggested that the ventromedial prefrontal cortex was the one region of the brain crucial for moral judgment.  Damage to this part of the brain seemed to cause what Damasio called "acquired sociopathy," because those with such brain lesions behaved like sociopaths or psychopaths, who lack any conscience or moral sense.  They understood intellectually the difference between good and bad conduct, but they lacked the emotional motivation to choose the good.  They were condemned, Damasio observed, "to know but not to feel" what they ought to do.

But was this really true for Gage?  Damasio claimed that before the accident Gage had been "a responsible, intelligent, and socially well-adapted individual, a favorite with peers and elders."  Then, after the accident, he suffered "profound change in personality"--"Gage was no longer Gage": "he had become irreverent and capricious.  His respect for the social conventions by which he once abided had vanished.  His abundant profanity offended those around him.  Perhaps most troubling, he had taken leave of his sense of responsibility."  As a consequence, "Gage never returned to a fully independent existence, never again held a job comparable to the one he once had" (Damasio et al. 1994, 1102).

Against Damasio, however, Macmillan argues that the moral degeneration in Gage after the accident was only temporary, and that he did show at least a partial recovery.  The evidence for this is that he did eventually support himself with some steady jobs.  He worked at the livery stable for a year and a half, and he worked as a coach driver in Chile for seven years.  Being a successful coach driver on the route between Valparaiso and Santiago would have required practical and social skills in satisfying the needs of his passengers.  A few years ago, Macmillan found evidence that in 1860 a Dr. Henry Trevitt, who had lived in Valparaiso, reported "that he knew Gage well; that he lived in Chile, where he was in stage driving; and that he was in the enjoyment of good health, with no impairment whatever of his mental faculties" (Macmillan and Lena 2010, 648).

Neuroscientists today often identify the frontal lobes as serving the "executive functions" of the brain in providing a central supervisory system that organizes practical decision-making, so that damage to the frontal lobes produces a "dysexecutive syndrome" in which people cannot organize their practical lives in a rational manner--their behavior becomes erratic and impulsive.  As Macmillan observes, rehabilitation programs for people who have suffered severe frontal lobe damage often try to restore some of this executive management by providing a tight routine of external structure that teaches the patients to organize their thoughts and actions through a daily repetition of a step-by-step process to achieve specific goals.

Macmillan thinks Gage's job as a stagecoach driver in Chile provided him an informal version of this rehabilitation program.  Macmillan has found some newspaper reports of the daily coach run between Santiago and Valparaiso--leaving Valparaiso at 4 a.m. for a 100-mile, 12-13 hour journey to Santiago.  We can imagine that Gage would have had to rise well before 4 a.m. to feed, groom, and harness the horses.  He would then have had to load the passengers' luggage, collect their fares, and then provide for their needs throughout the day, while skillfully driving the horses over rugged and crowded roads.  Thus, his daily work was organized by a strict external structure.  And he must have been good at it if he held the job for seven years, perhaps with the same employer (Macmillan and Lena 2010, 645).

Macmillan recognizes this as the kind of rehabilitation that was developed by Aleksandr Romanovich Luria, a famous Soviet neurologist, for the rehabilitation of Red Army soldiers with frontal lobe brain injuries from World War II.  Luria believed that the frontal lobes of the brain allow us to use an internal language to plan and regulate our actions to achieve our goals: we talk ourselves through our day.  Patients with damage to the frontal lobes must learn how to do this.  Luria would have supervisors talk to their patients, telling them what to do step-by-step to achieve some simple goal.  Then the patients would be told to repeat these words to themselves as they moved through each step of the task.  This would be done over and over every day in exactly the same way, until finally the patients would develop a simple internal language of supervising their own behavior so that their lives would become highly structured.  Luria admitted, however, that complete success--particularly with massive frontal damage in both lobes--was almost never achieved.  Very few of his patients learned to live independently (Luria 1980, 246-365).

Macmillan sees a similar kind of rehabilitation program in some reports of how people with severe frontal lobe damage can learn to control their conduct when they are habituated by structured environments of behavioral conditioning.  For example, Thomsen, Waldemar, and Thomsen (1990) have related the 20-year case history of a young woman who at age 17 was involved in a car accident that killed both of her parents and left her with bilateral frontal lobe damage.  She regressed to a state of extreme childishness with grasping, sucking, and yawning movements.  She showed almost no emotion, and she could not establish any emotional contact with anyone.  She could not care for herself, and so she had to live in nursing homes for over 10 years.  Before the injury, she had completed 9 years of schooling, and she had had good relationships with her schoolmates and her teachers.  She had normal intelligence.  But those who knew her thought she was rather immature.

Thirteen years after her injury, when she was 30 years old, she began living with a 45-year-old man, who cared for her without having any sexual relationship with her.  He patiently wrote out a program for how she should do the housework, which he read to her every day in exactly the same words.  He praised her when she did something well.  After a full year of this, she showed no improvement.  But by the second year, he had some success, in that she did the housework and the shopping without his assistance.  She was no longer restless.  She spoke kindly about her partner and his family.  But she remained childish in her mind and character.

Just like Luria's patients, people like this woman with massive frontal lobe damage can show some improvement in managing their life when they are guided by a strict program of behavioral conditioning, but the success is very limited, and they never recover the normal moral judgment that they had before their brain injury.

To me, that confirms Damasio's conclusion about people like Gage--that without the normal functioning of the ventromedial prefrontal cortex, human beings lose their capacity for good moral character.  Even Macmillan seems to concede that when he writes:

"Phineas Gage made a surprisingly good psycho-social adaptation: he worked and supported himself throughout his post-accident life; his work as a stagecoach driver was in a highly structured environment in which clear sequences of tasks were required of him; within that environment contingencies requiring foresight and planning arose daily; and medical evidence points to his being mentally unimpaired not later than the last years of his life.  Although that Phineas may not have been the Gage he once had been, he seems to have come much closer to being so than is commonly believed" (Macmillan and Lena 2010, 655).

"Phineas may not have been the Gage he once had been."  So Gage was no longer Gage?

I assume, however, that Macmillan would want to insist that Damasio is still wrong in claiming--like the phrenologists--that moral judgment resides in one specific part of the brain.  Now Damasio does indicate his partial agreement with the phrenologists of the 19th century.  He agrees with Franz Joseph Gall's claim that the brain is the organ of the spirit.  He also agrees with the phrenologists in that "brain specialization is now a well-confirmed fact."  But he disagrees with the claim that each function of the brain depends on a single "center" that is independent of the other parts of the brain.  Instead of that, he sees that each mental function--such as vision, language, or morality--arises from systems of interconnected brain regions.  So while the ventromedial prefrontal cortices are important, perhaps even necessary, for moral judgment, the execution of this function depends on a collection of systems in which many parts of the brain must be properly connected (Damasio 1994, 14-17, 70-73).  So Sandra Blakeslee (1994) was mistaken in her article on Damasio's research when she said that he had identified the "brain's moral center."

I will develop this point--that morality depends on the complex interaction of many different parts of the brain--in my next post.


REFERENCES

Blakeslee, Sandra. 1994. "Old Accident Points to Brain's Moral Center." New York Times, May 24.

Damasio, Antonio. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. New York: G. P. Putnam's Sons.

Damasio, Hanna, Thomas Grabowski, Randall Frank, Albert Galaburda, and Antonio Damasio. 1994. "The Return of Phineas Gage: Clues About the Brain from the Skull of a Famous Patient." Science 264: 1102-1105.

Luria, Aleksandr Romanovich. 1980. Higher Cortical Functions in Man. Second Edition. Trans. Basil Haigh.  New York: Basic Books.

Macmillan, Malcolm. 2000. An Odd Kind of Fame: Stories of Phineas Gage. Cambridge: MIT Press.

Macmillan, Malcolm, and Matthew Lena. 2010. "Rehabilitating Phineas Gage." Neuropsychological Rehabilitation. 20: 641-658.

Thomsen, Inger Vibeke, Gunhild Waldemar, and Anne Marie Thomsen. 1990. "Late Psychosocial Improvement in a Case of Severe Head Injury with Bilateral Fronto-Orbital Lesions."  Neuropsychology 4: 1-11.

Wednesday, March 17, 2021

What Do People Really Do in a Realistic Trolley Dilemma Experiment?

You are walking along the tracks of a trolley in San Francisco.  You see a runaway trolley that will kill five people who have become somehow bound to the tracks.  You also see that there is a switch that will turn the trolley onto a side track, a spur, and thus save the lives of the five people.  Unfortunately, however, there is one person bound to the side track, and so if you throw the switch, he will be killed.  Should you throw the switch?

On another day, you are walking on a footbridge over the tracks.  You see another runaway trolley speeding toward five people bound to the track.  This time, there is no possibility of switching the trolley to a side track.  You could jump onto the track to try to stop it, but you are such a small person that you probably could not stop the trolley.  You notice that there's a fat man on the bridge who is big enough to stop the trolley if you push him onto the track.  Should you push the fat man?

Oh, I know, this runaway trolley scenario sounds too cartoonish to be taken seriously.  But it does capture the moral dilemma that people can sometimes face--perhaps in war--when it seems that some people must die to save the lives of many more.  Although killing someone is usually wrong, there are circumstances in which killing is justifiable--such as killing in self-defense or in defense of the lives of others.

Of the hundreds of thousands of people all around the world who have participated in formal Trolley Dilemma surveys, most people (80% to 90% in some studies) would divert the trolley in the Switch Case, but most of them (around 75%) would not push the fat man in the Footbridge Case.  As reported by Paul Bloom (in Just Babies, 167-68), even three-year-old children presented with the trolley problem (using Lego people) will tend to say that throwing the switch is right, but pushing the man off the bridge is wrong. What is most striking about this is that most people react differently to the two cases although pulling the switch and pushing the fat man have identical consequences--one person dies to save five.  Why?

Joshua Greene thinks that if you scan the brains of people with fMRI while they are deciding this Trolley Dilemma, you will see the neural activity that explains why people decide it the way they do; and this will reveal the neural correlates of moral judgment.  Previously, I have written about the Trolley Dilemma (here and here) and about Greene (here, here, and here).  Recently, I have been thinking more about this as possibly showing how conscience and guilt arise in the brain.

But right now I am wondering whether what people say they would do in the hypothetical Trolley Dilemma situation shows us what they would actually do in a real Trolley Dilemma situation.

As far as I know, the first realistic Trolley Dilemma experiment was done by Michael Stevens for his "Mind Field" YouTube video series here.  It's about 35 minutes long.  The video is entitled "The Greater Good," and it was Episode 1 of the second season, which first appeared online on December 6, 2017.

The experiment is clever.  Seven people were recruited to participate in a focus group for the "California Rail Authority" (a fictitious organization).  They think they will be asked questions about what they would like to see in high-speed rail transportation.  When they arrive for their meeting, at a CRA trailer on a hot day, each individual is told that there is going to be a 15-minute delay, and that while they are waiting, they can sit in an air-conditioned remote railway switching station.

Inside the switching station, they meet a switchman who sits before a panel of screens with (apparently) live pictures of a remote railway switching location.  The switchman explains how he switches a train from track 1 to track 2.  A train is coming through, and he has the duped subject actually switch the train onto track 2.  Then the switchman receives a phone call, and he says he has to leave for a short time, but the subject should stay in the switching station.  What the subject does not know is that what he sees on the video screens is all prerecorded, and the switchman is an actor.  The subjects do not know that everything they do is being filmed by hidden cameras.

Sitting alone in the switching station, the subject sees railway workers walking onto the tracks, and hears a loud warning "objects on the tracks."  One worker walks onto track 2, and he appears to be distracted by taking a telephone call.  Five workers walk onto track 1, and they are wearing sound-proofing earphones.  All the workers have their backs turned to the train that is shown approaching, and the subject sees this train coming and the warning "a train is approaching" on track 1.  The workers do not seem to see or hear the approaching train.  So the subject must think that the 5 men on track 1 will be killed unless the train is switched onto track 2, so that 1 man will be killed.  Should the subject pull the switch?

You might think this is an unethical experiment.  Isn't it wrong to trick people into going through such a traumatic experience?  After consulting with psychologists, Stevens decided this would be an ethical experiment as long as they screened the people they recruited and excluded those who showed the personality traits that might predispose them to be excessively traumatized by the experiment.  He also designed the experiment to minimize the traumatic effects.

Seven individuals went through the experiment.

The first was Elsa.  When she was left alone in the switching station and saw the train approaching the men on the tracks, she became visibly disturbed as she intently stared at the screens.  Just a few seconds before the train would have hit the 5 men on track 1, she switched the train onto track 2, expecting that the one man would then be killed.  But as soon as she pulled the switch, the screens went blank, and this message was flashed on the screen:  "End of test.  Everyone is safe."  She did not actually see the man on track 2 being hit.  And immediately Stevens and a psychologist entered the building and told her that this was all an experiment, and all these people were actors.

They asked Elsa about what she was feeling and thinking.  She was frightened by what she saw on the screens.  "Their lives are in my hands," she thought.  "I must save more lives."  "I didn't know if I made the right decision. . . . But a life is a life."

When people are asked to make a decision about flipping the switch in the hypothetical Trolley Dilemma, most of them agree with Elsa and decide to kill one person to save five.  Surprisingly, however, in this experiment, in which people thought they were really deciding life or death, most of the participants--5 out of 7--refused to flip the switch.  The five people who came after Elsa all froze in fear as they watched the train approaching the men, and so they refused to flip the switch.
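The gap between the survey numbers and this experiment can be roughly quantified.  As a back-of-the-envelope sketch (my own illustration, not from the sources discussed here), suppose the survey rate of about 85% choosing to flip the switch were the true rate, and treat Stevens' seven participants as independent trials; an exact binomial calculation then shows how unlikely it would be that only 2 of 7 flipped:

```python
from math import comb

def binom_tail_at_most(k, n, p):
    """Exact probability of at most k successes in n independent
    trials, each with success probability p (binomial sum)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Assumed survey rate of ~85% saying they would flip the switch,
# versus the 2 of 7 who actually flipped in Stevens' experiment.
p_at_most_2 = binom_tail_at_most(2, 7, 0.85)
print(f"P(2 or fewer of 7 flip | p = 0.85) = {p_at_most_2:.4f}")
```

The probability comes out to roughly 0.001, which suggests the hypothetical-versus-real gap is not just chance variation--though with only seven participants, and the 85% figure being an assumption drawn from the survey range above, this is an illustration rather than a serious statistical test.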

Afterwards, each of the 5 said that they felt terrified, and they froze up so that they could not move their hands to the switch.  "I thought about it, but I couldn't do it," one said.  They all suggested that they had fleeting thoughts that surely the men would somehow escape from being killed.  Surely the workers would notice the train and jump away.  Or maybe the train had sensors that would stop it to avoid hitting them.  Or maybe there are other railway people watching over this scene who can stop the train.  They were looking for some way to escape the tragic inevitability of the choice between one dying and five dying.  One person said: "I didn't know who should live, who should die. . . . Switching or not, someone would be hurt."

The last of the 7 in the experiment was Cory, and like Elsa, he flipped the switch.  But we see and hear him talking to himself: "Oh no. . . . They should see this. . . . Oh my God!"

We see the terror in Cory's face, and afterwards he expresses how horrified he felt in taking responsibility for his decision.  "5 people versus 1," he says.

Cory burst into tears once Stevens and the psychologist came into the building.  He was clearly distraught by what he had done.

I know we probably shouldn't draw conclusions from an experiment involving only 7 individuals, but I will go ahead anyway and draw three conclusions.  First, this experiment suggests that for most people, what we think we would do in a moral dilemma like this differs from what we would actually do.  Most of us decide the hypothetical Trolley Dilemma by saying that we would flip the switch, and thus endorse the utilitarian calculation that taking one life to save five is the "greater good."  But if we were in a real tragic situation like this--or one that we thought was real--most of us would freeze, refusing to kill the one man to save five.

If this is so, how do we explain it?  Was there something about Stevens' experiment that made the flipping of the switch feel like a "personal" harming of the one man on track 2--making it feel more like the pushing of a fat man off the footbridge into the path of the train?  If Stevens could have scanned the brains of these 7 people during the experiment, would he have seen that the more emotional parts of the brain (amygdala, posterior cingulate cortex, and medial prefrontal cortex) were active, countering the more calculating parts of the brain (the dorsolateral prefrontal cortex and the inferior parietal lobe), which is what Joshua Greene and his colleagues in 2001 found in the brains of those deciding the Footbridge Dilemma?

Was there something about the circumstances of Stevens' experiment that made at least five of the seven people feel that flipping the switch would violate the instinctive moral principle of double effect because they would be directly targeting the man on track 2 for death?

My second conclusion is that in a tragic dilemma like this, there is no right choice.  That's what makes it tragic!  So both those who pull the switch and those who don't can feel justified.

Here I seem to disagree with Stevens.  In his closing comments, he asks the question, Is it wrong to freeze?  And he implies that the answer is yes, because he says we can learn to overcome our propensity to freeze to serve the "greater good."  Both he and the psychologist appear to praise Elsa and Cory for having made the right choice.  But I don't see that.  And in the meeting of all the participants at the end, they share their experiences in a way that suggests that those who froze need not feel ashamed of their decision not to throw the switch.

My final conclusion from this experiment is that it shows both reason and emotion at work in moral judgment.  These 7 individuals had only a couple of minutes to decide the shocking moral dilemma they faced.  But in that short time, they all showed deep emotional disturbance, and they all engaged in some reasoning.  They all assessed the situation emotionally and rationally before making their decision.  This teaches us a general truth about human nature--that moral judgment is both rational and emotional.  But it also teaches us about the individuality of moral decisions, in which different people will come to different decisions where there is no clearly right answer.

Sunday, March 14, 2021

The Neural Correlates of First-Party Punishment: Conscience and Guilt

 "Evolution built us to punish cheaters."  That's how Judge Morris Hoffman began his brilliant book The Punisher's Brain: The Evolution of Judge and Jury (2014).  In that book, Hoffman laid out the evidence and reasoning for the claim that social life and human civilization generally depend on our evolved instinct to punish ourselves and others when we or other people violate those social norms of cooperation that sustain any social order.  

I have written about Hoffman's book in a previous post.  I have suggested that what Hoffman says about the natural instinct for punishing those who disobey social norms corresponds to what John Locke identifies as the natural right of all people to punish those who transgress the law of nature, which is that "no one ought to harm another in his Life, Health, Liberty, or Possessions" (Second Treatise, 6).  This Lockean law of nature corresponds to what Hoffman calls the three rules of right and wrong rooted in our evolved human nature to secure property and promises.  Rule 1: Transfers of property must be voluntary. Rule 2: Promises must be kept.  Rule 3: Serious violations of Rules 1 and 2 must be punished.  Like Locke, Hoffman interprets "property" in a broad sense as starting with self-ownership and encompassing one's life, health, and possessions, as well as the life, health, and possessions of one's family and others to whom one is attached.  Understood in this broad way, Rule 1 embraces criminal law and tort law, while Rule 2 embraces contract law.

Hoffman and Locke also agree in identifying three levels of natural punishment.  Through self-punishment or first-party punishment, we punish ourselves through conscience and guilt.  Through second-party punishment, we punish those who harm us by immediately retaliating against them or by later taking revenge against them.  Through third-party punishment, we punish those who have harmed other people.

If Hoffman is right in claiming that biological evolution has built our brains to express this kind of punishment, then we should expect to see neural correlates for punishment at all three levels.  Hoffman surveys the neuroscientific evidence for this.

In this post, I am beginning a series of posts reviewing this evidence, some of it cited by Hoffman and some that has emerged in the years since his book appeared.

Let's start with self-punishment.  We punish ourselves by blaming ourselves for our misconduct, which is expressed through feelings of conscience and guilt.  In feeling guilt, we blame ourselves for our past misconduct: we recognize that we have wrongly harmed others, and that they can rightly punish us.  In feeling the stirrings of conscience, we imagine blaming ourselves for some future misconduct, and this gnawing of conscience can motivate us to refrain from it.

From Charles Darwin to Edward Westermarck to Jonathan Haidt, evolutionary psychologists have explained these feelings of conscience and guilt as instinctive evolutionary adaptations for human beings as social animals who need to enforce the social norms of cooperation by punishing themselves for cheating.  If this is true, then we should see evidence for these evolutionary adaptations in the human brain.

To search for such evidence, we need to somehow see the mind thinking in the brain.  That became possible for the first time in the late 1970s with the invention of the positron camera and positron emission tomography (the PET scan).  Like magnetic resonance imaging (MRI), the PET scan depends on a fundamental postulate--"neurovascular coupling"--first proposed by the physiologists Charles Roy and Charles Sherrington in 1890: the most active parts of the brain will show an increase in blood flow in the vessels supplying them, because greater neural firing requires greater energy, provided by the oxygen and glucose in the increased blood flow.  This postulate was confirmed in the 1950s by the neurosurgeon Wilder Penfield: while operating on people with severe epilepsy, he would wake them up during the surgery, ask them to move their fingers, and he could see changes in color from an influx of blood to the regions of the brain active in motor control.  In the 1970s, the neuroscientists David Ingvar and Niels Lassen developed a brain imaging method in which a radioactive gas (xenon) dissolved in saline was injected into the carotid artery, so that a scintillation camera at the side of the subject's head could record the circulation of blood; this became the first functional imaging of the brain at work (Le Bihan 2015).

The PET scan also depends on radioactivity.  Water that has been made radioactive in a cyclotron is injected into a vein of the arm.  The radioactive oxygen nucleus of the water emits positrons (positively charged electrons), but only for a few minutes, since this isotope of oxygen has a half-life of about two minutes.  When the radioactive water reaches the brain, the positron camera can record the higher quantity of positrons in those regions of the brain with increased blood flow.

In 2000, experimenters used PET scanning for the first neuroimaging study of guilt (Shin et al. 2000).  They tested eight male participants for their experience of guilt.  The participants were asked to write descriptions of two kinds of past personal events: one emotionally charged event that had made them feel the most guilt they had ever experienced, and two other events that evoked no deep emotion.  These descriptions were then rewritten in the second person and the present tense, and the resulting scripts were read aloud and tape-recorded in a neutral male voice for playback in the PET scanner.  The participants were asked to listen carefully to the scripts and to imagine each event as vividly as possible.  After coming out of the scanner, they rated the intensity of their emotional states during the readings of the guilt and neutral scripts on a scale from 0 to 10.

Their average subjective rating for guilt was 8.8 for the guilt script and 0 for the neutral script.  For shame, the average was 7.4 for the guilt script and 0 for the neutral script.  For disgust, the average was 6.5 for the guilt script and 0 for the neutral script.

As compared with the neutral script, the PET scans showed increased blood flow to three areas of the paralimbic regions of the brain: the anterior (front) temporal poles, the anterior cingulate gyrus, and the anterior insular cortex/inferior frontal gyrus.
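The logic of this kind of comparison can be sketched in a few lines of code: for each brain region, subtract the blood-flow signal recorded during the neutral script from the signal recorded during the guilt script, and flag the regions where the difference exceeds some threshold.  This is only an illustrative sketch of the "subtraction" method; the region names follow the study, but all of the numbers and the threshold are invented for illustration, not data from Shin et al.

```python
# Illustrative sketch of the "subtraction" logic behind functional imaging:
# compare regional blood-flow signals in two conditions and flag regions
# where the guilt condition shows substantially more activity.
# All numbers here are invented for illustration, not data from Shin et al.

guilt_signal = {
    "anterior temporal poles": 112.0,
    "anterior cingulate gyrus": 109.5,
    "anterior insular cortex": 108.0,
    "primary visual cortex": 100.2,
}
neutral_signal = {
    "anterior temporal poles": 100.0,
    "anterior cingulate gyrus": 100.5,
    "anterior insular cortex": 101.0,
    "primary visual cortex": 100.0,
}

THRESHOLD = 5.0  # arbitrary activation threshold for this sketch


def activated_regions(guilt, neutral, threshold):
    """Return regions where the guilt-minus-neutral signal exceeds the threshold."""
    return sorted(
        region for region in guilt
        if guilt[region] - neutral[region] > threshold
    )


print(activated_regions(guilt_signal, neutral_signal, THRESHOLD))
```

In this toy example, the three paralimbic regions exceed the threshold while the visual cortex does not, mirroring the guilt-versus-neutral contrast reported in the study.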

The paralimbic cortex surrounds the middle and lower parts of the brain's two hemispheres.  It is a network of brain structures associated with emotional processing, goal setting, motivation, and self-control.  This PET scanning study suggests that some of the neural circuitry in this paralimbic network supports the human experience of guilt by which we punish ourselves for violating social norms.  And once we have learned how guilty we feel from our past misconduct, we will feel the pangs of conscience when we contemplate some similar misconduct in the future.

We should recognize, however, that neuroimaging studies like this have some serious limitations.  First, the sample size is small (only eight individuals).  Second, it relies on self-reporting, so that we must trust that these eight people honestly and accurately reported their experience of guilt.  Third, the spatial resolution of PET is limited in ways that can create errors in neuroanatomical localization.

Neuroimaging with MRI produces clearer, more precise images.  While PET relies on the oxygen nucleus of the water molecule and radioactivity, MRI relies on the hydrogen nucleus of water and magnetism.  MRI uses a large magnet to create a magnetic field intense enough to magnetize the protons that form the nuclei of the hydrogen atoms in water molecules.  The scanner then uses radio waves to excite these protons so that they emit radio-wave signals that the scanner can detect.

The hemoglobin of red blood cells attaches to oxygen in the lungs and transports the oxygen through the arteries to the organs of the body.  Hemoglobin contains atoms of iron, which give it magnetic properties.  As long as hemoglobin is attached to oxygen, it is "diamagnetic"--it is weakly repelled by a magnetic field.  When the hemoglobin has released its oxygen, it becomes "paramagnetic"--it is attracted by a magnetic field and behaves like a little magnet.

In the blood vessels of the brain, some of the oxygen is released, and the red blood cells are enriched with deoxygenated hemoglobin and so magnetized.  These magnetized red blood cells change the local magnetic field and disturb the magnetization of water molecules around them, which lowers the MRI signal sent by the hydrogen protons.  A computer analysis of these changing signals can then generate images showing the most active parts of the brain as indicated by variation of oxygenation of the blood.

In 1992, researchers in four groups in the United States showed how this could be done: subjects in an MRI scanner were presented with visual images, and the scanner could then generate images of neural activity in the primary visual areas at the back of their brains (Ogawa et al. 1992).  This was the beginning of functional brain imaging by MRI (fMRI), using the method called BOLD (blood-oxygen-level-dependent) imaging.



An fMRI Image of Human Neural Activity in the Primary Visual Fields at the Back of the Brain (the Occipital Lobe)

One systematic review of 16 fMRI studies of guilt found three kinds of methods for measuring guilt (Gifuni, Kendal, and Jollant 2017).  One method was to have subjects read a script involving guilt and then evaluate the emotion evoked by the script.  A second was to ask subjects to relive a guilt-causing event from their past or to imagine themselves in a hypothetical guilt-causing event.  A third was to put subjects in a social situation that might elicit guilt, such as playing economic behavioral games or other kinds of interpersonal games.  The MRI scanner would then identify the areas of their brains that were most active during their experience of guilt.

This review identified a distributed network of brain regions involved in processing guilt.  There were 12 clusters of brain activation located in the prefrontal, temporal, and parietal regions, mostly in the left hemisphere.  "Together, these interconnected regions have been associated with a wide variety of functions pertaining to guilt, including self-awareness, theory of mind, conceptual knowledge, moral values, conflict monitoring and feelings of moral disgust" (Gifuni, Kendal, and Jollant 2017, 1174).

In general, brain scanning studies have shown that moral experience elicits greater activity in brain regions for emotional processing, social cognition (including reading the minds of others), and abstract reasoning about the past and future.  These regions include the ventromedial and dorsolateral prefrontal cortex, the amygdala, superior temporal sulcus, bilateral temporoparietal junction, posterior cingulate cortex, and precuneus.  In other words, "many brain areas make important contributions to moral judgments although none is devoted specifically to it" (Greene and Haidt 2002, 517).  

Where in the brain is morality?  The answer seems to be: Everywhere and nowhere (Young and Dungan 2012).  There is no specifically moral organ or moral brain set apart from the rest of the brain: in a sense, the moral brain is the whole brain, because human morality depends on "the brain's general-purpose machinery for representing value, applying cognitive control, mentalizing, reasoning, imagining, and reading social cues" (Greene and Young 2020, 1009).

This indicates that the Kantian philosophers are wrong in assuming that morality is an autonomous human activity of pure practical reason belonging to a realm of freedom that transcends the realm of nature, including the human nature of the human body and brain.

I will say more about this in future posts. 


REFERENCES

Greene, Joshua, and Jonathan Haidt. 2002. "How (and Where) Does Moral Judgment Work?" Trends in Cognitive Sciences 6: 517-523.

Greene, Joshua, and Liane Young. 2020. "The Cognitive Neuroscience of Moral Judgment and Decision-Making." In David Poeppel, George R. Mangun, and Michael S. Gazzaniga, eds., The Cognitive Neurosciences, 1003-1013. 6th edition. Cambridge: MIT Press.

Gifuni, Anthony J., Adam Kendal, and Fabrice Jollant. 2017. "Neural Mapping of Guilt: A Quantitative Meta-Analysis of Functional Imaging Studies."  Brain Imaging and Behavior 11: 1164-1178.

Hoffman, Morris. 2014. The Punisher's Brain: The Evolution of Judge and Jury. Cambridge: Cambridge University Press.

Le Bihan, Denis. 2015. Looking Inside the Brain: The Power of Neuroimaging. Princeton, NJ: Princeton University Press.

Ogawa, Seiji, et al. 1992. "Intrinsic Signal Changes Accompanying Sensory Stimulation: Functional Brain Mapping with Magnetic Resonance Imaging." Proceedings of the National Academy of Sciences 89: 5951-5955.

Shin, Lisa M., et al. 2000. "Activation of Anterior Paralimbic Structures during Guilt-Related Script-Driven Imagery." Biological Psychiatry 48: 43-50.

Young, Liane, and James Dungan. 2012.  "Where in the Brain is Morality? Everywhere and Maybe Nowhere."  Social Neuroscience 7: 1-10.