By Lupita Valencia
Evolution has made us experts at recognizing faces. We are innately social creatures, hard-wired to detect faces in our environment. From the moment we are born, we gaze at our caregivers’ faces and spend much of our time looking at other people’s faces as they talk and interact with us. While faces share the same basic parts, such as eyes, noses, and mouths, these features vary in size, shape, and color. Our brains nonetheless recognize each combination as a single face rather than as a collection of distinct parts. Learning to recognize individuals and their expressions helps us detect potential threats from others. In addition, facial expressions are essential to our communication and convey information that words often cannot, as the widespread use of emojis in text makes evident.
Our natural attraction to faces makes us prioritize them over other stimuli, as they are rich sources of social information (Bindemann et al., 2005). Despite minimal exposure to the world, newborns show a preference for upright face-like images, even when these are simply patterns with more elements in the upper half (Turati et al., 2002). This likely reflects the fact that a face has more features on its top half than on its bottom half. Remarkably, a similar preference appears even before birth. Thanks to advances in 4D ultrasound technology, which reproduces “live” video of the fetus, researchers can now study this inside the womb by projecting patterns of light through the uterine wall. Human fetuses in the third trimester of pregnancy tend to turn their heads toward patterns that resemble upright faces, but do not display this behavior when the same face-like patterns are inverted (Reid et al., 2017). That said, research on early face recognition has produced mixed results: because young children develop so quickly, it is difficult to pinpoint exactly what drives this preference. What is certain is that our ability to detect faces develops very early and is an important facet of our lives.
The reason we are so good at recognizing and focusing on faces is that the human brain is primed to detect them. In fact, distinct brain areas are responsible for detecting and processing faces, including the occipital face area (OFA) and the fusiform face area (FFA). The occipital cortex, located at the back of the brain, plays a crucial role in processing visual information more broadly. Within this region, the OFA takes part in the initial phases of face processing, identifying parts of faces such as the eyes, nose, or mouth (Pitcher et al., 2011). The fusiform gyrus, on the underside of the brain, is associated with recognizing faces and words. Here, the FFA integrates all this visual information into a single unit, allowing us to detect a face even in a photograph or drawing. The FFA also lets us distinguish between individuals by associating unique facial characteristics with each person (Kanwisher and Yovel, 2006). Consequently, people with brain damage to the FFA have difficulty recognizing faces, a condition known as prosopagnosia, or face blindness. Our ability to perceive faces even has a neural signature, termed the N170, first described at the Hebrew University in Israel (Bentin et al., 1996). The N170 is a wave of electrical activity that peaks roughly 170 milliseconds after we view a face, hence its name. Put simply, it is a signal the brain produces in response to seeing a face, helping us make sense of our surroundings. In essence, this intricate network, along with the remarkable N170 neural signature, illustrates the specialized machinery behind our ability to recognize faces.
We are so primed to recognize faces that we become convinced we see them when they aren’t really there, a phenomenon called face pareidolia (Caruana and Seymour, 2022). The word “pareidolia” traces back to Greek roots meaning “beyond form or image” and refers to perceiving meaningful patterns where none exist. Examples abound in daily life, whether we are finding faces while cloud gazing or peering at an oddly shaped fruit.
Figure 1. Examples of pareidolia in everyday life.
Given our exceptional ability to detect faces even in the absence of one, it is interesting to explore how our brains respond to entities that fall just short of our expectations. As artificial intelligence advances rapidly, so does the production of robots, including ones made to look more like us. This has led to the concept of the uncanny valley, first introduced in an essay by Japanese roboticist Masahiro Mori (Mori, 1970/2012). It refers to the unease and discomfort felt when a character or robot comes slightly too close to appearing human, its slight imperfections made all the more apparent. Mori included a graph with a steep dip to depict the relationship between an object’s degree of human resemblance and our emotional response to it. Typically, our affinity increases as human resemblance increases, up to a certain point, after which it drops abruptly into revulsion.
Figure 2. The graph depicts the uncanny valley, the proposed relation between the human likeness of an entity and the perceiver’s affinity for it. [Translators’ note: Bunraku is a traditional Japanese form of musical puppet theater dating from the 17th century. The puppets range in size but are typically about a meter in height, dressed in elaborate costumes, and controlled by three puppeteers obscured only by their black robes.]
While Mori’s essay is most famous for its graph, the essay itself offered no empirical evidence for the effect. It wasn’t until the essay was translated into English that researchers began to ask whether the effect could be reproduced in the laboratory. One study tested this by showing participants morphed images that ranged from a robot through an android to a human (MacDorman and Ishiguro, 2006). A later study compared two approaches and found that the effect was clearly present when using real photographs of highly humanlike robots and computer-generated images, but not when using the morphing technique (Palomäki et al., 2018). These findings indicate that the degree of human likeness perceived in robots depends on how the images are produced and presented. Beyond the laboratory, the film industry has tapped into the allure (or perhaps aversion) of the uncanny valley to capture our attention. Films like “The Polar Express”, “Ex Machina”, and the 2023 release “M3GAN” are prime examples of the uncanny valley in cinema. Horror films in particular exploit the ambiguous boundary of what counts as human to amplify creepiness. While there are competing theories about why this eeriness arises in the first place, it reminds us that our perception filters the way we see the world.
Recognizing faces is an example of how incredibly attuned our brains are to processing visual stimuli. Our innate ability to detect faces is a reflection of our human nature and holds evolutionary implications. Undoubtedly, advances in technology and research methods related to face perception will continue to inform our understanding of this phenomenon, and more broadly, how we make sense of and interact with the world.
Illustrated by Kayla Lim
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of cognitive neuroscience, 8(6), 551–565. https://doi.org/10.1162/jocn.1996.8.6.551
Bindemann, M., Burton, A. M., Hooge, I. T., Jenkins, R., & de Haan, E. H. (2005). Faces retain attention. Psychonomic bulletin & review, 12(6), 1048–1053. https://doi.org/10.3758/bf03206442
Caruana, N., & Seymour, K. (2022). Objects that induce face pareidolia are prioritized by the visual system. British journal of psychology, 113(2), 496–507. https://doi.org/10.1111/bjop.12546
Kanwisher, N., & Yovel, G. (2006). The fusiform face area: a cortical region specialized for the perception of faces. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 361(1476), 2109–2128. https://doi.org/10.1098/rstb.2006.1934
MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive and social science research. Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, 7(3), 297–337. https://doi.org/10.1075/is.7.3.03mac
Mori, M. (2012, June 12). The uncanny valley: The original essay by Masahiro Mori (K. F. MacDorman & N. Kageki, Trans.). IEEE Spectrum. https://spectrum.ieee.org/the-uncanny-valley (Original work published 1970)
Palomäki, J., Kunnari, A., Drosinou, M., Koverola, M., Lehtonen, N., Halonen, J., Repo, M., & Laakasuo, M. (2018). Evaluating the replicability of the uncanny valley effect. Heliyon, 4(11), e00939. https://doi.org/10.1016/j.heliyon.2018.e00939
Pitcher, D., Walsh, V., & Duchaine, B. (2011). The role of the occipital face area in the cortical face perception network. Experimental brain research, 209(4), 481–493. https://doi.org/10.1007/s00221-011-2579-1
Reid, V. M., Dunn, K., Young, R. J., Amu, J., Donovan, T., & Reissland, N. (2017). The human fetus preferentially engages with face-like visual stimuli. Current biology, 27(12), 1825–1828.e3. https://doi.org/10.1016/j.cub.2017.05.044
Turati, C., Simion, F., Milani, I., & Umiltà, C. (2002). Newborns’ preference for faces: What is crucial? Developmental psychology, 38(6), 875–882. https://doi.org/10.1037/0012-1649.38.6.875