
What can the trolley problem tell us about the brain?

By Shiri Spitz Siddiqi

You’re standing at a railroad switch as a runaway trolley hurtles toward five innocent people lying trapped in its path. On a diverging track lies one other person, trapped but, for the moment, out of harm’s way. Divert the trolley onto this track and you’ll maximize the number of survivors, but you’ll cause someone’s death in the process. Do nothing and you might spare your conscience, but the human cost will be much higher. What should you do?

This sacrificial dilemma is known as the trolley problem, a thought experiment dreamed up by philosopher Philippa Foot in 1967. Perhaps most widely known as a meme today, the trolley problem and its variations (Thomson, 1985) have been used to great effect for decades by philosophers and scientists studying moral judgment and decision-making. The charm of these thought experiments for researchers is that they pit two ethical perspectives against each other. The first is the ethic of deontology, defined by adherence to absolute, “thou-shalt-not-kill”-type moral rules (Kant, 1785/1959). The other is the ethic of utilitarianism, characterized by a cost-benefit analysis focused on maximizing beneficial consequences (Mill, 1861/1998). In the trolley problem, deontological ethics prohibit sacrificing the one life to save the five, no matter the potential for maximizing overall welfare. By contrast, utilitarian ethics require killing the one to save the five, even though it may violate sacred moral norms against killing. By forcing respondents to choose between these ethical approaches, researchers learn about the concerns driving people’s moral judgments, be they consequences, moral norms, or a general preference for remaining passive versus intervening in high-stakes situations (Gawronski et al., 2017). Using techniques like fMRI, neuroscientists have enriched this understanding by using the trolley problem to learn about the emotional and cognitive processes underpinning the nuances of moral judgment.


Caption: This highly recognizable illustration of the trolley problem is thought to have been originally posted to philosophy professor Jesse Prinz’s personal website (http://subcortex.com/pictures/) sometime around 2009.

A puzzle for researchers

Researchers working with the trolley problem uncovered an intriguing phenomenon early on. Studies consistently find that a majority of participants appear comfortable making the utilitarian choice in the original trolley problem, often called the “switch” dilemma (e.g., Hauser et al., 2007). In a variation of the trolley problem called the “footbridge” dilemma, the five deaths can be prevented by pushing a large onlooker off of a footbridge and onto the tracks. Unlike the switch dilemma, where most people approve of diverting the trolley, most people presented with the footbridge dilemma disapprove of pushing the large onlooker to their death (Hauser et al., 2007). Pushing the onlooker in the footbridge dilemma produces the same end result as diverting the trolley in the switch dilemma: one life is sacrificed to save five. So, why do people become staunch deontologists when presented with the footbridge dilemma? Why should one sacrificial method feel more acceptable than the other if both ultimately produce the same result?


Caption: The footbridge dilemma. Credit to Jesse Prinz at http://subcortex.com/pictures/.

Harvard neuroscientist Joshua Greene believes the answer lies in the different neural responses evoked by each version of the dilemma. Consider what is required to maximize well-being in each case: you can save the five in the switch dilemma by pulling a lever, but achieving the same outcome in the footbridge dilemma requires physically pushing an innocent bystander to their death. The decreased physical and psychological distance between the decision maker and the sacrificial victim in the footbridge dilemma (which the researchers categorize as “personal”) triggers a strong emotional reaction that is less likely to occur in the switch dilemma (which they categorize as “impersonal”), where no contact with the victim is necessary. The emotional response evoked by the footbridge dilemma might interfere with one’s ability to engage in cost-benefit analysis, with two possible consequences: one could be more likely to refuse to push the onlooker, or, if one did decide that pushing was appropriate, that decision would be delayed by the additional effort needed to suppress the initial emotional reaction. This interference would be less pronounced in an impersonal dilemma because there would be a weaker emotional reaction to suppress.

Greene and his colleagues supported these ideas in a seminal 2001 paper in which they presented evidence that personal and impersonal dilemmas evoke different brain responses. In two experiments, the researchers asked participants to respond to each of 60 dilemmas while their brains were scanned with functional magnetic resonance imaging (fMRI). Each dilemma was categorized as non-moral (such as whether to take a bus or a train), moral-personal (e.g., the footbridge dilemma), or moral-impersonal (e.g., the switch dilemma). For each dilemma, participants indicated whether they considered the suggested course of action “appropriate” or “inappropriate.” In the moral dilemmas, a response of “appropriate” indicated acceptance of the utilitarian choice, while a response of “inappropriate” indicated a preference for the deontological choice. As participants completed each dilemma, the scanner recorded their brain activity, and the researchers also recorded how long each response took.
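
To make the structure of this design concrete, here is a minimal sketch of how trial-level data from such a paradigm might be represented and summarized. It is purely illustrative: the dilemma names, responses, and reaction times below are invented, and this is not the authors’ actual analysis code.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    dilemma: str       # e.g., "footbridge"
    category: str      # "moral_personal", "moral_impersonal", or "non_moral"
    response: str      # "appropriate" (utilitarian) or "inappropriate" (deontological)
    rt_seconds: float  # time from dilemma onset to response

# Hypothetical trials for one participant (all values invented for illustration)
trials = [
    Trial("footbridge", "moral_personal", "inappropriate", 4.1),
    Trial("footbridge_variant", "moral_personal", "appropriate", 6.8),
    Trial("switch", "moral_impersonal", "appropriate", 3.9),
    Trial("bus_or_train", "non_moral", "appropriate", 3.2),
]

# Mean reaction time for "appropriate" (utilitarian) responses, by dilemma category --
# the kind of comparison that revealed slower utilitarian responding in personal dilemmas.
for category in ("moral_personal", "moral_impersonal", "non_moral"):
    rts = [t.rt_seconds for t in trials
           if t.category == category and t.response == "appropriate"]
    if rts:
        print(f"{category}: mean 'appropriate' RT = {mean(rts):.1f} s over {len(rts)} trial(s)")
```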

Analyses of the participants’ brain activity suggested that, as the authors had predicted, moral-personal dilemmas uniquely trigger emotional processing and interfere with processes associated with utilitarian cost-benefit calculations. Portions of the medial frontal gyrus, posterior cingulate gyrus, and angular gyrus — areas associated with emotion — all showed greater activation for moral-personal dilemmas compared to moral-impersonal and non-moral dilemmas. Meanwhile, brain regions associated with working memory, such as the parietal lobe and middle frontal gyrus, were less active for moral-personal dilemmas compared to the other dilemma categories. Participants also tended to take longer to respond “appropriate” on moral-personal dilemmas relative to the other dilemma categories. Combined with the heightened emotional processing, the time-delay associated with utilitarian responses in moral-personal dilemmas suggests that there was greater conflict between participants’ emotions and their reasoning, which needed to be overcome before they could reach a decision. Finally, participants were much quicker to respond “inappropriate” in moral-personal dilemmas than they were to respond “appropriate,” a pattern that didn’t show up for moral-impersonal or non-moral dilemmas, further suggesting that moral-personal dilemmas are psychologically unique.

Dual moral processes

Greene and colleagues’ explanation for this pattern of results borrows from cognitive psychology, a branch of psychology that focuses on understanding how our minds organize and manipulate information. They adopted what is known as a “dual-process” approach, which assumes that mental processes can be categorized as two basic types: automatic or controlled (Kahneman, 2003). Although mental processes may be better conceptualized as falling on a continuum ranging from fully automatic to fully controlled, dual-process theories generally lump them under either the “automatic” umbrella (termed “System 1” processing) or the “controlled” umbrella (termed “System 2” processing). System 1 processing is thought to be rapid and effortless thanks to its reliance on emotion and intuition, while System 2 is thought to be slower, effortful, and more systematic. According to Greene’s dual-process theory of moral judgment (2001, 2004, 2013), deontological reasoning involves System 1, and utilitarian reasoning involves System 2. In other words, utilitarian reasoning wins out when our emotions don’t protest too much, but when they do, our effort-conserving mental systems nudge us along the path of least resistance.
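
As a rough caricature of this dual-process logic (not Greene’s formal model), one can picture the judgment as a fast System 1 aversion that vetoes the harmful option by default, with slower System 2 deliberation able to override that veto when enough cognitive resources are available. All thresholds and numeric values below are hypothetical.

```python
def moral_judgment(lives_saved: int, lives_lost: int,
                   emotional_aversion: float, deliberation_effort: float) -> str:
    """Toy dual-process caricature of a sacrificial-dilemma judgment.

    emotional_aversion: strength of the System 1 "don't harm" reaction (0 to 1),
        assumed higher for up-close, personal harm (footbridge) than for
        impersonal harm (switch).
    deliberation_effort: System 2 resources available (0 to 1), assumed lower
        under time pressure or cognitive load.
    """
    utilitarian_benefit = lives_saved > lives_lost  # simple cost-benefit check
    VETO_THRESHOLD = 0.5                            # hypothetical cutoff

    # System 1: a strong aversive reaction vetoes the harmful action by default...
    if emotional_aversion > VETO_THRESHOLD:
        # ...unless System 2 has enough resources to override the veto.
        if utilitarian_benefit and deliberation_effort > emotional_aversion:
            return "appropriate (utilitarian, after effortful override)"
        return "inappropriate (deontological)"

    # Weak emotional reaction: the cost-benefit result goes through unopposed.
    return "appropriate (utilitarian)" if utilitarian_benefit else "inappropriate"

# Switch dilemma: impersonal harm, weak aversion -> the utilitarian choice comes easily.
print(moral_judgment(lives_saved=5, lives_lost=1,
                     emotional_aversion=0.2, deliberation_effort=0.6))
# Footbridge dilemma: personal harm, strong aversion -> deontological refusal,
# unless deliberation is strong enough to override it.
print(moral_judgment(lives_saved=5, lives_lost=1,
                     emotional_aversion=0.8, deliberation_effort=0.6))
```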

Recent advances

More recent findings have substantiated the idea that deontology is intuitive and rooted in emotion, while utilitarianism is cognitively effortful and deliberative. For example, later work by Greene’s team found that participants who tended to respond in a utilitarian fashion had elevated activity in the dorsolateral prefrontal cortex (Greene et al., 2004), a region implicated in cognitive control. Additionally, having participants complete an attention-consuming task simultaneously with their dilemma judgment (thereby reducing their cognitive resources) selectively slows utilitarian judgments while leaving deontological judgments unaffected (Greene et al., 2008). Personal moral dilemmas also elicit increased activity in the amygdala (Greene et al., 2004), further supporting the idea that the tendency toward deontological responding in these dilemmas is driven by an aversive emotional response. Consistent with this, putting participants in a good mood (thereby dampening the negative emotional response to the dilemma) makes them more utilitarian (Valdesolo & DeSteno, 2006).

Some of the most compelling evidence for Greene’s dual-process theory comes from individuals with damage or abnormalities in brain areas implicated in moral decision-making. Damage to the ventromedial prefrontal cortex, known to cause emotional deficits, is associated with more utilitarian judgments in moral dilemmas (Ciaramelli et al., 2007; Greene, 2007; Koenigs et al., 2007), even when the utilitarian judgment requires prioritizing the lives of strangers over the lives of family members (Thomas et al., 2011). Frontotemporal dementia, also known to cause “emotional blunting,” is likewise associated with stronger utilitarian tendencies in these dilemmas (Mendez et al., 2005). Similarly, psychopaths react less strongly to others’ moral transgressions (Harenski et al., 2010) and experience fewer qualms about behaving immorally (Abe et al., 2018) and harming others (Patil, 2015). These abnormalities are typically attributed to reduced activation in the amygdala and weaker connectivity between the amygdala and the ventromedial prefrontal cortex (Decety et al., 2013), both of which disrupt emotion processing. It comes as no surprise, then, that psychopathic traits are associated with stronger utilitarian tendencies (Bartels & Pizarro, 2011).

Newer research has added nuance to some of these earlier findings. Rather than being associated with emotion per se, some of the brain regions that Greene and others interpreted as representing increased emotional activation in moral-personal dilemmas (e.g., the medial prefrontal cortex, the medial parietal cortex, the temporo-parietal junction) are now thought to constitute a neural system called the default mode network (Buckner et al., 2008; Greene & Young, 2020). This system is most active during imaginative tasks such as thinking about ourselves, others, and our performance on ongoing and future tasks (such as a moral dilemma; Raichle, 2015; Sormaz et al., 2018). The default mode network presumably discourages utilitarian responding by enabling vivid mental representations of unpleasant courses of action (e.g., the pushing of the onlooker off of the footbridge). In line with this idea, researchers have found that people with more visual cognitive styles tend to make more deontological judgments in moral dilemmas (Amit & Greene, 2012).

As for Greene’s dual-process theory, it has not gone unchallenged. Bence Bago and Wim De Neys (2019), for example, used a paradigm in which participants responded to each moral dilemma twice: once while completing an unrelated memorization task under time pressure, and once again with no constraints. In each of four studies, they found that roughly 70% of the participants who eventually made the utilitarian choice given unlimited time made the same choice under time pressure and cognitive load, conditions that supposedly trigger intuitive processing. This finding challenges the dual-process prediction that utilitarian decision-making requires time and deliberation, suggesting instead that people can experience utilitarian intuitions just as they can experience deontological intuitions. Other researchers cast doubt on what, exactly, “utilitarian” responses mean in the context of sacrificial dilemmas. Kahane and colleagues (2015), for example, present evidence that “utilitarian” responses in sacrificial moral dilemmas have little or even a negative relationship with core aspects of utilitarian ethics, such as identification with all of humanity. These authors argue that, rather than reflecting genuine concern for the greater good, utilitarian responses more likely reflect a level of comfort with inflicting harm on others. Although this critique does not challenge the observed relationships between utilitarian responding and the brain, it does highlight potential pitfalls of using people’s responses to sacrificial moral dilemmas to infer their animating moral concerns.
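
Returning to Bago and De Neys’ two-response paradigm, the sketch below illustrates the kind of consistency check behind that roughly 70% figure, using invented data: each participant contributes an initial (speeded, under-load) response and a final (unconstrained) response, and we ask how many eventual utilitarian responders were already utilitarian the first time. The data are hypothetical, not the authors’ actual dataset.

```python
# Each tuple: (initial response under time pressure + load, final unconstrained response)
# "U" = utilitarian choice, "D" = deontological choice. Data invented for illustration.
responses = [
    ("U", "U"), ("U", "U"), ("D", "U"), ("U", "U"), ("D", "D"),
    ("U", "U"), ("D", "D"), ("U", "U"), ("D", "U"), ("U", "U"),
]

final_utilitarian = [pair for pair in responses if pair[1] == "U"]
already_utilitarian = [pair for pair in final_utilitarian if pair[0] == "U"]

share = len(already_utilitarian) / len(final_utilitarian)
print(f"{share:.0%} of eventual utilitarian responders were also utilitarian under load")
# With these invented data: 6 of 8, i.e. 75% -- in the same ballpark as the ~70% reported.
```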

So what?

Although probably none of us will ever face a situation exactly like the trolley problem, this thought experiment and other sacrificial dilemmas have allowed researchers to examine the decision-making processes that may operate in real-life dilemmas. Perhaps most salient in recent memory is the COVID-19 pandemic, when healthcare workers struggled to allocate life-saving equipment to patients, and school administrators grappling with whether to re-open schools had to weigh students’ learning against the immediate health of their community. Other examples include emergencies like the September 11th attacks, when U.S. Vice President Dick Cheney had to consider authorizing the military to shoot down hijacked planes carrying American citizens. Moral dilemma research, particularly research that contrasts personal and impersonal dilemmas, suggests that it’s easier to prioritize the greater good when we’re not forced to look our victims in the eyes. Cheney was in a bunker when he authorized the military to shoot down the hijacked planes (though, as it happened, the orders were never carried out). The many school administrators who voted to reopen schools likely had no way of quantifying the additional deaths that their decision would cause. We can’t know what they would have decided under different circumstances, but moral dilemma research suggests that psychological distance from the victims and uncertainty about the outcome may have influenced these decisions.

Moral judgment presents a unique challenge for neuroscientists. Rather than emerging from dedicated neural circuitry, it co-opts general-purpose systems designed for processing emotion, representing others’ mental states, and forecasting the possible consequences of our actions (Young & Dungan, 2012). But despite the lack of a “moral organ,” human moral judgment is sophisticated, sensitive to multiple inputs (such as the psychological distance between ourselves and others) and subject to multiple influences, such as the mood we happen to be in. Although it may be best known as a meme today, the trolley problem, together with other sacrificial dilemmas, has inspired an enormously generative research paradigm that has shed light on many of the processes underlying moral judgment.

~~~

Written by Shiri Spitz Siddiqi
Illustrated by Kayla Lim
Edited by Talia Oughourlian and Anastasiia Gryshyna

~~~



References

Abe, N., Greene, J. D., & Kiehl, K. A. (2018). Reduced engagement of the anterior cingulate cortex in the dishonest decision-making of incarcerated psychopaths. Social Cognitive and Affective Neuroscience, 13(8), 797–807.

Amit, E., & Greene, J. D. (2012). You see, the ends don’t justify the means: Visual imagery and moral judgment. Psychological Science, 23(8), 861–868. https://doi.org/10.1177/0956797611434965

Bago, B., & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801. https://doi.org/10.1037/xge0000533

Bartels, D. M., & Pizarro, D. A. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121(1), 154–161.

Buckner, R. L., Andrews‐Hanna, J. R., & Schacter, D. L. (2008). The brain’s default network: Anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences, 1124(1), 1–38.

Ciaramelli, E., Muccioli, M., Làdavas, E., & Di Pellegrino, G. (2007). Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Social Cognitive and Affective Neuroscience, 2(2), 84–92.

Decety, J., Skelly, L. R., & Kiehl, K. A. (2013). Brain response to empathy-eliciting scenarios involving pain in incarcerated individuals with psychopathy. JAMA Psychiatry, 70(6), 638–645. https://doi.org/10.1001/jamapsychiatry.2013.27

Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5.

Gawronski, B., Armstrong, J., Conway, P., Friesdorf, R., & Hütter, M. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making. Journal of Personality and Social Psychology, 113(3), 343–376. https://doi.org/10.1037/pspa0000086

Greene, J. (2013). Moral tribes: Emotion, reason, and the gap between us and them. Penguin Press.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108. https://doi.org/10.1126/science.1062872

Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11(8), 322–323. https://doi.org/10.1016/j.tics.2007.06.004

Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–1154.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.

Greene, J. D., & Young, L. (2020). The cognitive neuroscience of moral judgment and decision making. In M. S. Gazzaniga (Ed.), The Cognitive Neurosciences (Vol. 6). MIT Press.

Harenski, C. L., Harenski, K. A., Shane, M. S., & Kiehl, K. A. (2010). Aberrant neural processing of moral violations in criminal psychopaths. Journal of Abnormal Psychology, 119(4), 863-874.

Hauser, M., Cushman, F., Young, L., Kang-Xing Jin, R., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22(1), 1–21. https://doi.org/10.1111/j.1468-0017.2006.00297.x

Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M., & Savulescu, J. (2015). ‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition, 134, 193–209. https://doi.org/10.1016/j.cognition.2014.10.005

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697-720.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908–911.

Mendez, M. F., Anderson, E., & Shapira, J. S. (2005). An investigation of moral judgement in frontotemporal dementia. Cognitive and Behavioral Neurology, 18(4), 193–197.

Patil, I. (2015). Trait psychopathy and utilitarian moral judgement: The mediating role of action aversion. Journal of Cognitive Psychology, 27(3), 349–366.

Raichle, M. E. (2015). The brain’s default mode network. Annual Review of Neuroscience, 38, 433–447.

Sormaz, M., Murphy, C., Wang, H., Hymers, M., Karapanagiotidis, T., Poerio, G., Margulies, D. S., Jefferies, E., & Smallwood, J. (2018). Default mode network can support the level of detail in experience during active task states. Proceedings of the National Academy of Sciences, 115(37), 9318–9323. https://doi.org/10.1073/pnas.1721259115

Thomas, B. C., Croft, K. E., & Tranel, D. (2011). Harming kin to save strangers: Further evidence for abnormally utilitarian moral judgments after ventromedial prefrontal damage. Journal of Cognitive Neuroscience, 23(9), 2186–2196.

Thomson, J. J. (1985). The trolley problem. Yale Law Journal, 94(6), 1395–1415.

Valdesolo, P., & DeSteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17(6), 476-477.

Young, L., & Dungan, J. (2012). Where in the brain is morality? Everywhere and maybe nowhere. Social Neuroscience, 7(1), 1-10. https://doi.org/10.1080/17470919.2011.569146

Author

  • Shiri Spitz Siddiqi

    Shiri Spitz Siddiqi is a PhD student at the University of California, Irvine, working with Dr. Pete Ditto and Dr. Pia Dietze. Her research focuses on "culture war" issues (for example, cultural appropriation, diversity, and free speech), both in terms of how people make judgments about them and in terms of their consequences for democracy. An avid collaborator, Shiri is also an affiliate with the Center for the Science of Moral Understanding. In her free time, she enjoys knitting, listening to podcasts, playing the cello, and hanging out with her cat, Scout. Shiri received her BS in Psychology and BA in Linguistics from the University of Texas at Austin in 2018. Before beginning her PhD program, she managed the EEG Lab in the Department of Psychological and Brain Sciences at Colgate University.
