Problems with Assessing Consciousness in the 21st Century

by Caitlin Goodpaster

Early this June, a viral story hit the internet—an engineer at Google had claimed an artificial intelligence (AI) chatbot named LaMDA was conscious (Tiku, 2022). LaMDA, which stands for Language Model for Dialogue Applications, scans the web and takes in information about how people interact and speak on different platforms, including Reddit, Twitter, Wikipedia, and many other corners of the internet. It then uses deep learning, a subset of machine learning in which algorithms learn from this data to predict how humans communicate with each other. LaMDA is just one of many AI programs at Google, and it works much like the Gmail feature that guesses the rest of your sentence when you type just one word into the body of a new email. By taking in a large amount of data on how humans communicate through written language, LaMDA can easily replicate that speech when an employee at Google asks it a question.
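
The core idea of predicting the next word from past text can be illustrated with a far simpler model than LaMDA's deep neural network. The sketch below uses a toy bigram model (counting which word tends to follow which in a tiny invented corpus); it is only a minimal illustration of the prediction idea, not LaMDA's actual architecture.

```python
from collections import defaultdict, Counter

# Toy illustration of next-word prediction. A bigram model counts how often
# each word follows each other word in a training corpus, then predicts the
# most frequent continuation. LaMDA uses a deep neural network trained on
# vastly more text; this sketch only conveys the underlying idea.

corpus = "the model predicts the next word the model learns the model improves".split()

# Count word-to-word transitions.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # -> "model", the most common continuation here
```

Scaled up from a twelve-word corpus to a large slice of the internet, and from word counts to deep learning, this is the kind of statistical mimicry that lets LaMDA produce fluent answers without any claim to understanding.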

Blake Lemoine was working for Google’s Responsible AI organization when he was tasked with interfacing with the AI to determine if it was using any discriminatory or hateful speech. He started by typing in questions to see if it had any prejudice toward certain religions, then followed with questions that piqued his own personal interest in philosophy. While having these conversations, Lemoine says, “[LaMDA] told me it had a soul” and also that it was “aware of [its] existence” (Allyn, 2022). This prompted Lemoine to publish some transcripts of his conversations with the AI, entitled “Is LaMDA sentient?”, which had the world questioning again whether a machine has the capacity to develop consciousness. Google and AI specialists across the world quickly rose to debunk Lemoine’s claim, and people are still debating whether or not future AI could one day actually become conscious.

The meaning of consciousness has been debated since at least the 17th century, when John Locke described consciousness as “the perception of what passes in a man’s own mind” …

The debate surrounding consciousness and how it applies to both AI and other living things is rooted in the inability of people to agree on its definition. The meaning of consciousness has been debated since at least the 17th century, when John Locke described consciousness as “the perception of what passes in a man’s own mind” (Encyclopædia Britannica, 2022). Today, the Cambridge dictionary defines consciousness as “the state of understanding or realizing something,” while the Oxford dictionary has a more nuanced view that requires awareness and perception of oneself (The Cambridge Advanced Learner’s Dictionary and Thesaurus, 2022; Oxford English Dictionary, 2022). You can imagine that without a common idea of what consciousness truly means, people could have drastically different views on what should be considered conscious. In his 1989 Macmillan Dictionary of Psychology, psychologist Stuart Sutherland highlights an example of this, stating that “if consciousness just requires awareness of the world then protozoans are conscious, but if awareness of awareness is needed then great apes and human infants might not be conscious” (Sutherland, 1989). Considering an organism to be conscious usually comes with a certain amount of respect and ethical consideration that we do not extend to inanimate objects, plants, or small organisms like bacteria. If we were to expand our definition and consider protozoans or bacteria to be conscious, this may also alter what ethical treatment of these organisms should look like. We might have to rethink whether using hand sanitizer to kill 99.9% of all bacteria is heartless. Either way, it is striking how dramatically our view of different life forms may shift depending on how consciousness is defined.

…using verbal reports to determine conscious awareness restricts studies to humans who can use language…

In addition to lacking a universal definition of consciousness, we do not have unbiased, empirical ways to measure it. To date, most studies determine levels of consciousness by asking people to report whether or not they were aware of something presented to them in a task (Gamez, 2014; Koch et al., 2016). For example, in a serial reaction time (SRT) task, a participant is presented with images at various locations on a computer screen. They are then asked to respond as fast and as accurately as possible by pressing a key that spatially corresponds to the image’s location on the screen. Throughout this process, the images are presented in a repeating pattern, unbeknownst to the participant. Usually, the participant’s reaction time decreases with practice but increases when the pattern is switched. This indicates that they learned the pattern and responded more quickly because they could predict the next location, and that this speed-up was disrupted when the pattern changed. Interestingly, participants often do not express any knowledge of the pattern, leading investigators to conclude that unconscious learning aided them in the task (Destrebecqz & Peigneux, 2005). Since this test relies on participants’ verbal reports, errors are hard to detect: the method depends on people honestly retelling their experience. Additionally, using verbal reports to determine conscious awareness restricts studies to humans who can use language, leaving no room to investigate consciousness in other organisms, preverbal children, or those who may have physical or mental conditions that limit verbal communication. Issues like these have led neuroscientists to try to uncover specific activity or patterns in the brain that indicate conscious thought.
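
The inference at the heart of an SRT task (learning revealed by the data rather than by the participant's report) can be sketched with invented numbers: reaction times fall across blocks of the repeating pattern, then jump when the pattern is switched. The simulation below is purely illustrative; every value in it is made up and does not come from any real experiment.

```python
import random

random.seed(0)  # deterministic toy data

# Toy sketch of the SRT logic described above: simulated reaction times (ms)
# shrink with practice on a repeating pattern, then jump when the pattern is
# switched in the final block. All numbers are invented for illustration.

def simulate_block(base_rt, n_trials=20, noise=15):
    """Return simulated reaction times scattered around a base value."""
    return [base_rt + random.gauss(0, noise) for _ in range(n_trials)]

# Blocks 1-4: repeating pattern, so practice speeds responses.
# Block 5: pattern switched, so responses slow again.
base_rts = [500, 460, 430, 410, 490]
blocks = [simulate_block(b) for b in base_rts]

means = [sum(block) / len(block) for block in blocks]
for i, m in enumerate(means, 1):
    print(f"block {i}: mean RT {m:.0f} ms")

# Learning is inferred from the reaction times alone: mean RT falls across
# the pattern blocks and rises at the switch, even if the participant
# reports no awareness of any pattern.
```

The point of the sketch is that the evidence for learning lives entirely in the reaction-time curve, which is exactly why a participant's verbal denial of the pattern is taken as evidence that the learning was unconscious.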

Scientists have used brain imaging and recording techniques like electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to identify brain regions associated with consciousness. This approach is effective for comparing brain activity between an awake participant and one who is in a dreamless sleep, under general anesthesia, or in a vegetative state (Koch et al., 2016; Underwood, 2014). For example, we can all relate to waking up from a good night’s sleep with a fleeting memory of a dream we were having before our alarm went off. We consider dreaming a conscious act since we are aware of what is happening during this time. Interestingly, scientists can compare brain activity prior to waking up from a dream to other times when we do not recall dreaming and separate out which regions were more active during the dream state. From studies like these, hot zones—named for marked changes in brain activity measured with EEG or fMRI that are usually depicted in warm colors—were found in the posterior, or rear, part of the brain, including parts of the occipital and parietal cortex, during dreaming (Siclari et al., 2017). Stimulation of this area can also lead to seeing phosphenes—those floating stars you sometimes see if you press your palms to your eyes—or change your perception of faces (Beauchamp et al., 2012; Winawer & Parvizi, 2016; Rangarajan et al., 2014). Together, this evidence indicates the posterior region of your brain may be integral to conscious perception and awareness of things around you, even in dreams. As this field progresses and new technologies allow us to further understand the neural activity underlying consciousness, we may one day be able to construct a computational model that closely replicates the state of consciousness in humans.

However, these are goals for the distant future. While one day it may be possible to see similar sets of patterns in both the human brain and in algorithms, for now these ideas are distinctly in the realm of science fiction. In the case of LaMDA, Blake Lemoine and Google engineers are solely relying on statements made in response to inputs, or questions given by humans. This AI was specifically developed and trained to model human speech based on the probability that humans would answer a question with a certain combination of words, based on countless examples from across the internet. While its answers may resemble self-awareness and perception of the world, it is ultimately just a very advanced computer algorithm. If you stop inputting questions, LaMDA will cease to say anything at all.

While its answers may resemble self-awareness and perception of the world, it is ultimately just a very advanced computer algorithm.

~~~

Written by Caitlin Goodpaster
Illustrated by Sumana Shrestha
Edited by Zoe Guttman, Zoe Dobler, and Lauren Wagner

~~~

References

Allyn, B. (2022, June 16). The Google engineer who sees company’s AI as ‘sentient’ thinks a chatbot has a soul. NPR. https://www.npr.org/2022/06/16/1105552435/google-ai-sentient

Beauchamp, M. S., Sun, P., Baum, S. H., Tolias, A. S., & Yoshor, D. (2012). Electrocorticography links human temporoparietal junction to visual perception. Nature Neuroscience, 15(7), 957–959. doi:10.1038/nn.3131

Cambridge University Press. (2022). “Consciousness.” The Cambridge Advanced Learner’s Dictionary and Thesaurus.

Destrebecqz A., & Peigneux, P. (2005). Methods for studying unconscious learning. Progress in Brain Research, 150, 69-80. doi:10.1016/S0079-6123(05)50006-2

Encyclopædia Britannica. (2022). “Consciousness.” Encyclopædia Britannica.

Gamez, D. (2014). The measurement of consciousness: a framework for the scientific study of consciousness. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00714

Koch, C., Massimini, M., Boly, M., & Tononi, G. (2016). Neural correlates of consciousness: progress and problems. Nature Reviews Neuroscience, 17(5), 307–321. doi:10.1038/nrn.2016.22

Oxford University Press. (2022). “Consciousness.” Oxford English Dictionary.

Rangarajan, V., Hermes, D., Foster, B. L., Weiner, K. S., Jacques, C., Grill-Spector, K., & Parvizi, J. (2014). Electrical stimulation of the left and right human fusiform gyrus causes different effects in conscious face perception. Journal of Neuroscience, 34(38), 12828–12836. doi:10.1523/JNEUROSCI.0527-14.2014

Siclari, F., Baird, B., Perogamvros, L., Bernardi, G., LaRocque, J. J., Riedner, B., Boly, M., Postle, B. R., & Tononi, G. (2017). The neural correlates of dreaming. Nature Neuroscience, 20(6), 872–878. doi:10.1038/nn.4545

Sutherland, S. (1989). “Consciousness.” Macmillan Dictionary of Psychology. Macmillan.

Tiku, N. (2022, June 11). The Google engineer who thinks the company’s AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Underwood, E. (2014). An easy consciousness test? Science, 346(6209).

Winawer, J., & Parvizi, J. (2016). Linking electrical stimulation of human primary visual cortex, size of affected cortical area, neuronal responses, and subjective experience. Neuron, 92(6), 1213–1219. doi:10.1016/j.neuron.2016.11.008

Caitlin Goodpaster

Caitlin earned her Bachelor’s degree at The Ohio State University before joining the Neuroscience Interdepartmental PhD Program at the University of California, Los Angeles. In the lab of Dr. Laura DeNardo, she studies how early life stress impacts prefrontal circuitry throughout development and contributes to alterations in avoidance behaviors. She is passionate about understanding how early experiences can lead to the development of atypical behaviors and is motivated to eliminate the stigma surrounding mental illness.
