Will Artificial Intelligence Fall in Love?
By Gabrielle Torre
In a few months, millions of viewers will return to Westworld, the HBO original series about artificially intelligent androids inhabiting a Western theme park. The show, based on a 1973 film written by the late Michael Crichton, depicts the highs and lows of human nature through stirring plotlines built around rich vacationers, who pay to enter the titular theme park and live out their fantasies.
Westworld is a classic simulacrum. It represents the Wild West: a setting where humanity’s wiles can run rampant without consequence, where androids live a recursive cycle of storylines without knowledge of their automated makeup. Westworld is thus a stand-in for reality, unknown to the android hosts—but not for long. The show’s questions (Who is android? Who is not?) are posited early in the first episode when Dolores, one of the central android protagonists, is questioned by a park technician about her self-awareness.
“Do intelligent robots need emotion?”
“Have you ever questioned the nature of your reality?” her creator asks. “No,” Dolores responds, in a perfectly programmed Western drawl.
We spend much of the season rooting for Dolores as she grapples with sensations strange to her android self. Yet we also spend this portion of the show in a sort of fear, as we realize that Dolores’ growing sentiments reveal her increasing sentience and intelligence. Dolores’ ascent towards conscious intelligence is marked by a growth in emotional capacity. After witnessing the deaths of fellow androids, Dolores quotes lines on pain and loss. “Did we write that for you?” her creator asks. She says no, adding, “I adapted it from a scripted dialogue about love.” And with that, we learn that Dolores is approaching human intelligence.
When we think about intelligence, we tend to focus on components like verbal reasoning, planning, perceptual skills, or other cognitive capabilities. Emotion has long been either ignored entirely or included in conceptions of intelligence as an afterthought. Perhaps, Westworld suggests, we should also consider emotion.
A recent article in the journal Trends in Cognitive Sciences posed this provocative question in its title: Do intelligent robots need emotion? Dr. Luiz Pessoa of the University of Maryland, College Park argues that, yes, advanced emotional capacity is necessary for theoretical frameworks of intelligence. Further, he argues that emotion must be not just included but integrated with cognition in order to make a truly intelligent robot.
Pessoa describes an updated version of the Turing Test, a classic thought experiment of computer science. It’s simple: can a machine exhibit intelligent behavior indistinguishable from human intelligence? The Test has seen its pop culture heyday (see one of our past articles about Philip K. Dick’s classic novel, Do Androids Dream of Electric Sheep?), but rarely does science fiction so directly influence scientific inquiry as in Pessoa’s article. He calls it the Dolores Test—a not-so-subtle nod to Westworld’s favorite robot. The Dolores Test involves humans and androids interacting and behaving together; each would be mistaken for the other if—and only if—emotion were integrated with cognition in the androids.
To fully understand Pessoa’s hypothesis, we must retreat from Westworld and ask, “What is intelligence?” Science itself is unsure how to measure this construct. Measures range from the intelligence quotient (IQ), a composite measure of verbal and perceptual abilities, to the g factor, a broader construct that summarizes an individual’s correlated performance across many cognitive tasks. The jury is still out on what exactly in the brain contributes to intelligence. More computationally based studies of intelligence now grapple with how to structure the cognitive functions that would enable it. Cognitive functions are skills like verbal reasoning, learning ability, or working memory. Thus, the components of intelligence, how to measure intelligence, and how to replicate intelligence in robots all remain unclear.
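The g factor mentioned above is a statistical construct: because performance on different cognitive tasks tends to be positively correlated, a single shared component can be extracted that accounts for much of that correlation. A minimal sketch of the idea, using simulated scores (the tasks, sample size, and noise level here are all illustrative assumptions, not data from any study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 people: one latent "g" ability plus task-specific
# noise on 4 hypothetical cognitive tasks.
g = rng.normal(size=(100, 1))
scores = g + 0.5 * rng.normal(size=(100, 4))

# The g factor is commonly estimated as the first principal
# component of the correlation matrix of task scores.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # ascending eigenvalues
first_pc = eigvecs[:, -1]                # each task's loading on "g"
variance_explained = eigvals[-1] / eigvals.sum()
```

With a strong shared factor, the first component explains most of the variance and every task loads on it with the same sign, which is the empirical signature the g construct summarizes.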
“Perhaps it’s the emotional understanding of Shakespeare that indicates just how smart these robots are.”
Classic hallmarks of intelligence, such as language, are shown throughout Westworld to be of serious concern to the androids’ creators and caretakers. The park’s androids are programmed to have sophisticated linguistic capabilities, including contextual improvisation and occasionally even quoting Shakespeare, but—returning to Pessoa’s idea—perhaps it’s the emotional understanding of Shakespeare that indicates just how smart these robots are. Quoting Shakespeare in a vacuum is fairly simple, as these things go. Understanding the meaning behind the Bard’s words and adapting them in response to external stimuli to express emotion exhibits a much greater intelligence at work.
Have we not yet approached true artificial intelligence because our models of the brain separate “feelings” from things like attention, problem solving, and planning? Pessoa’s article points out that in the brain, emotional substrates (e.g., the amygdala) are strongly interconnected with cognitive substrates (e.g., the frontal lobe). These cortical-subcortical networks are loops that tie the brain’s more primitive emotional structures to its more recently evolved cognitive machinery. Subcortical structures may provide non-conscious inputs to the loop’s more sophisticated cortical structures, where a conscious emotion is assembled. In healthy, intelligent humans, these networks seamlessly allow each system to inform the other. Emotions influence how we attend, how we solve problems, and how we plan… whether we like it or not. Applied to actual machines, this idea implies that removing an emotional module would cripple a robot’s intelligence.
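To make the integration idea concrete, here is a toy sketch (our own illustration, not from Pessoa’s paper) in which an “emotion” signal is not a bolt-on module but a value that directly modulates a cognitive process, here, attention allocation. All function and task names are hypothetical:

```python
# Toy model: a fast "subcortical" appraisal produces an arousal signal
# that reweights a "cortical" attention-allocation step.

def appraise(stimulus_threat: float) -> float:
    """Fast appraisal: clamp a perceived threat level to arousal in [0, 1]."""
    return max(0.0, min(1.0, stimulus_threat))

def allocate_attention(task_priorities: dict, arousal: float) -> dict:
    """Planning step whose weighting is modulated by arousal:
    high arousal shifts attention toward the threat-related task."""
    weights = {}
    for task, priority in task_priorities.items():
        boost = arousal if task == "scan_for_danger" else (1.0 - arousal)
        weights[task] = priority * (0.5 + boost)
    total = sum(weights.values())
    return {task: w / total for task, w in weights.items()}

tasks = {"solve_puzzle": 1.0, "scan_for_danger": 1.0}
calm = allocate_attention(tasks, appraise(0.1))
alarmed = allocate_attention(tasks, appraise(0.9))
```

The point of the sketch is architectural: there is no way to delete the emotion signal here without also breaking attention, because the two are computed in one loop rather than in separable modules.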
Where might this idea fall short for artificial intelligence? The neural substrates of emotion are well-delineated but structurally complex. To build a machine that mimics our biological machinery could prove impossible. Or, perhaps despite Pessoa’s assertion, an integrated emotion-cognition module isn’t necessary for future androids to resemble humanity. Maybe artificial intelligence doesn’t need to know love.
Regardless, future research might take notes from Westworld, or other science fiction favorites, to inspire hypotheses on what simulating human intelligence might require. One day, the android simulacrum may not be a simulacrum after all.
References:
LeDoux, J.E. and Brown, R. (2017). A higher-order theory of emotional consciousness. PNAS.
Pessoa, L. (2016). Do intelligent robots need emotion? Trends in Cognitive Sciences.
Turing, A. (1950). Computing machinery and intelligence. Mind.
