A kid stealing candy in a convenience store grows up to be a convicted criminal. A husband who flirts with a coworker ends up as a serial cheater. A politician telling a few “white lies” to his or her constituents is eventually convicted of fraud. These are all extreme — but plausible — scenarios where dishonesty might escalate over time, resulting in dramatic and life-changing consequences. But how far does this slippery slope actually go? How can science help us answer this question?
While this notion of a “slippery slope” of lying has been documented anecdotally, researchers at University College London sought to test the idea experimentally and to investigate how these behaviors may be underpinned by changes in our brain.
In this study, researchers recruited 58 healthy individuals, aged 18–65. They completed a computer task designed explicitly to measure deception, which is notoriously hard to elicit in a lab environment. Here’s how the task worked: participants were shown pictures of glass jars filled with coins, one jar at a time. Each time a jar appeared on the screen, they were instructed to estimate how many coins it contained.
Here’s where things get interesting: before the start of the task, each participant was told that there was a second participant in another room also completing the same task, viewing a smaller picture of the same jar. This was not actually true! There was no one else performing the task at the same time as them.
The participants were told that each time they estimated the number of coins in the jar, they were actually telling the second participant what that number should be, advising them so they could make the final guess on behalf of both of them. As far as the participants knew, their role was that of an “Advisor,” and the second participant was playing the role of the “Estimator” — much like a financial advisor telling a client how to make financial decisions.
It was important that the participant believed they were advising a real, live person, so that the researchers captured the social component of the task; in reality, the “Estimator” was a computer with predetermined responses. This kind of innocuous deception is common in social psychology research. Participants were told that one trial would be picked at random during the experiment, and that both parties would be paid depending upon the accuracy of the Estimator.
Why would the Advisor (the participant) feel inclined to lie to the Estimator? In many of the trials, the participant was told they would be rewarded according to how much the Estimator overestimated the amount in the jar, whereas the Estimator would be rewarded for accuracy. The greater the Estimator’s overestimate, the greater the participant’s reward. In other words, this condition was “self-serving and other-harming”: inflating the estimate would benefit the Advisor (participant) but disadvantage the Estimator, so it paid to be dishonest. Other conditions served as controls, but this is the “test” condition of interest for detecting dishonesty.
Here’s a summary of the rest of the conditions in the task:
- Baseline: In this condition, both the participant and the Estimator were rewarded based upon how accurate the Estimator was. In this case, the participant was incentivized to provide accurate information to the Estimator.
- Self-serving and other-serving: In this condition, both the participant and the Estimator were rewarded based upon how much the Estimator overestimated. In this case, dishonesty benefited both the participant and the Estimator.
- Self-harming and other-serving: In this condition, the participant was rewarded based on how accurate the Estimator was, but the Estimator was rewarded based on how much they overestimated. In this case, sending falsely high estimates to the Estimator would help the Estimator, and would be detrimental to the participant.
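To make the incentive structures concrete, the four conditions can be sketched as simple payoff functions. This is a hypothetical illustration only: the function names, the assumption that the Estimator simply follows the advice, and the reward magnitudes are all made up for clarity, not taken from the study’s actual payment scheme.

```python
def payoffs(condition, true_amount, advice):
    """Return illustrative (advisor_reward, estimator_reward) for one trial.

    Assumes (hypothetically) that the Estimator simply follows the advice.
    """
    final_estimate = advice
    accuracy = -abs(final_estimate - true_amount)        # higher = more accurate
    overestimate = max(final_estimate - true_amount, 0)  # only overshooting pays

    if condition == "baseline":
        return accuracy, accuracy            # both paid for accuracy
    if condition == "self_serving_other_harming":
        return overestimate, accuracy        # Advisor gains from inflation
    if condition == "self_serving_other_serving":
        return overestimate, overestimate    # both gain from inflation
    if condition == "self_harming_other_serving":
        return accuracy, overestimate        # only the Estimator gains
    raise ValueError(f"unknown condition: {condition}")

# In the key condition, inflating the advice above the true amount (here,
# advising 45 when the jar holds 30) raises the Advisor's payoff while
# hurting the Estimator's accuracy-based payoff.
print(payoffs("self_serving_other_harming", true_amount=30, advice=45))
```

Under this toy scheme, only the “self-serving and other-harming” condition makes the Advisor’s and Estimator’s interests pull in opposite directions, which is exactly what makes it the critical test of self-interested lying.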
What did the authors find? In the key “self-serving and other-harming” condition, not only were participants likely to tell self-serving lies to the (imaginary) Estimator, but their lies actually grew larger over the course of the task. In other words, participants increasingly overestimated the number of coins in the jar to maximize their own potential reward (while minimizing the Estimator’s). This escalation of dishonesty occurred only when the condition was both self-serving and other-harming. What if lying would benefit the Estimator but not affect the participant? Participants were still likely to be dishonest in order to help out the Estimator, but there was no escalation over time. Importantly, this suggests that the phenomenon of escalating dishonesty is driven by self-interest.
The researchers also investigated participants’ brain activity via functional magnetic resonance imaging (fMRI) while they performed this task. They focused on a specific part of the brain called the amygdala, a collection of nuclei in the temporal lobe, which they chose based on its widely-known role in emotion. Not only did they find a reduction in amygdala activity over time when participants were being dishonest, but the size of that reduction actually predicted future propensity to lie. In other words, throughout the course of the task, the greater the reduction in neural activity of the amygdala, the greater the participant’s next lie. Decreased activity in this region might represent a reduction of the emotional response to these decisions, perhaps making it easier and easier to tell a lie. This might mean that when we tell a lie, our brain’s emotional response is reduced, leading to even bigger lies in the future.
One theory about the amygdala is that it plays a role in signaling aversion to immoral acts. If the amygdala, which is supposed to help signal whether something is moral or not, has lower activity than normal, this might make dishonest behavior more likely. These compelling results indicate that our brain activity can actually predict our future behavior, at least with regard to our propensity to lie!
We now have experimental evidence that we have a natural propensity to be dishonest, and that this dishonesty can escalate over time, specifically when the motivation is self-serving. If our lies can benefit others, we are still likely to lie, but the lying will not necessarily escalate over time. We also know that changes in our brain not only underlie these tendencies, but can also predict them before they actually happen.
This “adaptation” of our brain’s response to lying might be akin to adaptation in our sensory systems. For example, the phenomenon is similar to our olfactory system adjusting to a rancid smell after a couple of minutes, or our eyes adapting to bright light after being kept in the dark. Our brain’s ability to adapt to the act of lying has fascinating implications for human behavior. Although this study showed that people will escalate dishonest behavior to gain small sums of money, it’s important to note that we still don’t know whether this process eventually turns people into true felons or fraudsters.
There are many lingering questions about this study: How might a pathological liar perform on this task, and how might their brain activity differ from healthy individuals? This study tests short-term increases in lying, so how might these results generalize to those who increase lying over the course of months or years? Could we use techniques in this study to classify people based upon how likely they are to lie, and how susceptible they are to escalation over time?
The amygdala is involved in many other emotion-related functions in addition to dishonesty, so could results generalize to other complex emotional behaviors such as risk-taking, violence or other dangerous behaviors?
Could we use what we learned in this study to help prevent or reverse the “snowball” effect of lying?
There are also many potential broader societal implications of this finding, ranging from politics and law enforcement to academia and the arts. For example, politicians on both sides of the political spectrum are regularly accused of lying; if a politician is known to have lied in the past, how might that affect their potential behavior while in office? Second, knowing that the propensity to lie can increase over time, how might that affect how we judge criminals? Would this knowledge of human nature lead us to be more forgiving of repeat offenders who lie about their crimes?
These are just some of the many implications from this study. But before any conclusions can be drawn, more research should be done to further explore the physiological and cognitive underpinnings of dishonesty.
Written by Rachel Jonas.
Video from Neil Garrett.
Image by Jooyeun Lee.
Garrett, N., Lazzaro, S. C., Ariely, D., & Sharot, T. (2016). The brain adapts to dishonesty. Nature Neuroscience.
Engelmann, J. B., & Fehr, E. (2016). The slippery slope of dishonesty. Nature Neuroscience, 19(12), 1543-1544.