Machine Yearning: The Rise of Thoughtful Machines

In the mid-twentieth century, artificial intelligence researchers invented a new type of computational system that could detect patterns in images – a daunting task for the technology of the day. Because this new system comprised highly interconnected information-processing nodes, resembling the organization and function of the brain, it became known as an artificial neural network.

At that time, neuroscience was still in its infancy, and our understanding of the brain was limited. Scientists knew that neurons could pass signals to other neurons. They had some idea that the connections between neurons were flexible, and that connection strengths could change. And by peering at cells through a microscope, it was easy to extrapolate that the total number of neuronal connections in the brain was astronomical. But the basics of the brain’s operation remained mysterious. Nobody had a clue how the human brain’s 86 billion neurons were subdivided into functional groups, how electrochemical fluctuations encoded information, or how neural circuits processed electrical signals. Thus, the similarity between artificial neural networks and biological neural networks didn’t extend very far.

At least, it didn’t initially.

“… the researchers designing systems are also desperately trying to understand how they work.”

Today, neural networks resemble biological brains more closely. These artificial systems can perform complicated tasks with surprising intelligence: researchers are currently developing systems that can learn to drive a car just by observing a human driver, or that can cooperate seamlessly with humans to solve problems. And the secret to the performance of these advanced neural nets is a complex and inscrutable system of connections buried in so-called hidden layers. The more hidden layers a deep learning neural network has, the more remarkable its problem-solving ability – and the less anyone can understand how it works.

Hence, we have reached a peculiar stage in the history of technology wherein the researchers designing systems are also desperately trying to understand how they work.

[Figure: A neural network schematic, showing hidden layers of interconnected information-processing nodes]

To investigate the intricate computation occurring deep inside neural nets that classify images, one strategy involves systematically feeding the network different images and singling out one hidden node at a time to find out what image properties cause that node to activate. In a neural net that can identify cupcakes in photos, there might be a hidden node that responds to blue stripes angled at 45 degrees. Or, there might be a node that responds to pink frosting in the center of the frame. By discovering the image properties uniquely recognized by each of many hidden nodes, researchers can start to piece together the function of the hidden layers, and how the composition of these layers can decode information about the image – from pixel to cupcake.
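To make this concrete, here is a minimal sketch of that probing strategy in Python. The network, its layer sizes, the chosen node, and the random “image set” are all hypothetical stand-ins for illustration, not any real cupcake detector:

```python
import torch
import torch.nn as nn

class CupcakeNet(nn.Module):
    """A hypothetical stand-in for a trained cupcake classifier."""
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(32 * 32 * 3, 64)  # one layer of 64 hidden nodes
        self.out = nn.Linear(64, 2)               # cupcake / not cupcake

    def forward(self, x):
        h = torch.relu(self.hidden(x.flatten(1)))
        return self.out(h), h                     # also expose hidden activations

net = CupcakeNet()
node = 17                                         # the single hidden node we probe

# Systematically feed the network different images and record how strongly
# the chosen node responds to each one.
images = torch.rand(1000, 3, 32, 32)              # stand-in for a real image set
with torch.no_grad():
    _, hidden_acts = net(images)

# The images that drive the node hardest hint at its preferred feature
# (e.g., blue stripes angled at 45 degrees, or centered pink frosting).
top_images = hidden_acts[:, node].topk(5).indices
print("Images that most activate node", node, ":", top_images.tolist())
```

Repeating this for each of many hidden nodes, and inspecting the top images by eye, is how the function of the hidden layers gets pieced together.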

“Neural networks are more aptly named than their inventors ever realized.”

This same strategy is a staple of neuroscientific research. Foundational studies of the brain’s visual system homed in on the precise properties of light and the visual field that activated specific neurons in different regions of the brain. With this method, neuroscientists learned that there are numerous brain areas in the visual system that each respond to different aspects of visual images – some neurons encode the region of space that a visual stimulus inhabits, some neurons encode colors, and other neurons encode more complex properties like object identity. And now that these neurons’ functional properties are clear, neuroscientists are able to form theories about how different visual areas connect, work together to decipher visual information, and distribute this information throughout the rest of the brain.

It seems, then, that neural networks are more aptly named than their inventors ever realized. Neural network researchers are studying their creations with a strategy identical to one neuroscientists use to study the brain, which leads to some thought-provoking speculation: What other neuroscientific research methods could be useful for studying neural networks?

It’s possible to imagine how fMRI, tractography, optogenetics, or event-related potential techniques could be tailored to the study of artificial neural networks. In neuroscience, these popular and powerful methods each capture a different type of data, and so can be used to test different types of hypotheses. The brain is too complex to ever yield complete knowledge of every neuron’s activity at every moment in time, so research questions focus on specific aspects of neural operation: the location of activity in the brain, whether a type of cell is necessary for some behavior, or the time course of a specific neural process. Then, findings from different research programs can be compared and woven together to form a theoretical understanding of how the brain works. This same broad strategy could be applied to the study of artificial neural networks, the ever-increasing complexity of which also thwarts detailed mechanistic understanding.
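As one example, the artificial analogue of a lesion study or of optogenetic silencing is straightforward to imagine: switch off a single hidden node and measure how the network’s behavior changes. Here is a toy sketch of that idea, reusing the hypothetical CupcakeNet and stand-in images from the earlier sketch:

```python
def output_with_lesion(net, images, node=None):
    """Run the network, optionally silencing one hidden node."""
    with torch.no_grad():
        h = torch.relu(net.hidden(images.flatten(1)))
        if node is not None:
            h[:, node] = 0.0          # the "lesion": force this node silent
        return net.out(h)

baseline = output_with_lesion(net, images)
lesioned = output_with_lesion(net, images, node=17)

# A large shift in the outputs suggests node 17 is causally involved in the
# computation, not merely correlated with it: the logic of a lesion study.
print("Mean output shift:", (baseline - lesioned).abs().mean().item())
```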

“If advanced neural networks can be directed to analyze their own functioning, would that change how we view ourselves?”

If we extrapolate further, to the bleeding edge of neuroscience, we tread into the realm of science fiction. Neuroimaging technologies have been steadily advancing, but most of the methodological progress is being made in data analysis. Using the same kind of fMRI data that has been available for decades, neuroscientists are now devising sophisticated statistical tools to answer new questions that were once thought to be unapproachable. Many of these advanced analytical tools, such as multi-voxel pattern analysis, support vector machines, and representational similarity analysis, are machine learning applications – they are powered by the same technology that drives artificial neural networks. So, if researchers studying artificial neural networks find success in adapting neuroscience methods to their own work, their efforts might eventually include these recent machine learning applications, at which point neural networks would be deployed in the analysis of themselves.
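To give a flavor of what that self-analysis could look like, here is a toy sketch of representational similarity analysis (RSA) turned on a network’s own layers. The “activations” below are random stand-ins for activations recorded from a real network, and the layer sizes are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
layer1 = rng.normal(size=(n_stimuli, 64))   # activations: stimuli x nodes
layer2 = rng.normal(size=(n_stimuli, 128))  # a second, wider layer

# A representational dissimilarity matrix (RDM) captures how differently
# a layer represents each pair of stimuli.
rdm1 = pdist(layer1, metric="correlation")
rdm2 = pdist(layer2, metric="correlation")

# Correlating the two RDMs compares the layers' representational geometry,
# just as RSA compares brain regions to each other (or to models).
rho, _ = spearmanr(rdm1, rdm2)
print(f"Layer-to-layer representational similarity: rho = {rho:.2f}")
```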

Introspection, the capacity to gaze inward and reflect on the very mental processes that underlie our inquisitiveness, is often considered to be a defining trait of humanity that sets us apart from other animals. But if advanced neural networks can be directed to analyze their own functioning, would that change how we view ourselves? Would artificially intelligent systems need to be recognized on equal standing with us? Or would we simply need to strike one possible essentially human trait off of the ledger of human nature?

Before we start worrying about losing our unique place in the universe, we can take some small comfort in one likely scenario. Namely, it’s possible that self-reflective neural networks would be more successful at deciphering their own functioning than we humans are at deciphering ours. As the great American psychologist William James described, our introspection is “like seizing a spinning top to catch its motion, or trying to turn up the gas quickly enough to see how the darkness looks.” In other words, we have the capacity for introspection, but true introspective understanding is elusive. So our uniqueness would be preserved: In the club of ineffectual self-reflection, we could still be the sole members.

Machine Yearning – Image by Sean Noah

References

• Knight, W. (2017) The Dark Secret at the Heart of AI. MIT Technology Review, April 11, 2017. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
• Hubel, D.H., & Wiesel, T.N. (1962) Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology 160(1):106-154.
• Vogt, N. (2018) Machine learning in neuroscience. Nature Methods 15, 33.
• James, W. (1890) The Principles of Psychology. New York: Henry Holt and Company.

Sean Noah

Sean is a PhD student at UC Davis, studying the neural mechanisms of attention. Previously, he studied cognitive science and neurobiology at UC Berkeley, and then worked for Think Now, Inc., studying attention with EEG. He is deeply interested in artificial intelligence, the nature of information in the brain, and the relationship between consciousness and attention. He also loves reading, writing, eating, and gardening.

One thought on “Machine Yearning: The Rise of Thoughtful Machines”

  • April 11, 2018 at 12:18 pm

    In computer programming we write our algorithms to perform specific tasks. So, we begin with the programmer’s explanation of how the inner workings actually work to produce the desired result. And when things don’t work as expected, as a last resort, we run a “trace”. The trace is extra code added to each routine that reports the value of the variables at the start of the routine and the values at the end.

    Theoretically, code could be added to each artificial neural node in an AI network to produce such a trace. The practical question is, who has time to read it?!
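    For what it’s worth, such a trace is easy to wire up in a modern framework. Here is a minimal sketch using PyTorch forward hooks on a tiny throwaway network (a hypothetical toy, not any particular production system):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))

def trace(module, inputs, output):
    # The neural-net analogue of printing each routine's variables.
    print(f"{module.__class__.__name__}: in={tuple(inputs[0].shape)}, "
          f"out={output.detach().flatten().tolist()}")

for layer in net:
    layer.register_forward_hook(trace)   # attach the trace to every "node"

net(torch.rand(1, 4))                    # one forward pass, fully traced
```

    Even this three-layer toy prints a line per module per pass; a network with millions of nodes would bury you in numbers, which is exactly the “who has time to read it” problem.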

    Neuroscientists have pointed out that most of the brain’s heavy lifting happens unconsciously. Conscious awareness is a separate function which, if added to the task, slows it down. The toddler is very conscious of where she puts down each step, and of the successful (walking) or unsuccessful (falling) results. But eventually it becomes automatic behavior, which she no longer thinks about… until it’s time to ride a bike, or roller skate, or drive a car, and the painful self-awareness comes back into play.

    And I suspect that’s going to be the same problem with trying to run a trace of an artificial neural net. So, if the programmers of the neural network cannot explain how it works, then we’re in a fix. As my boss used to say, “If you have to run a trace, you’re lost”. (Nevertheless, I found where the bug was many times by being forced by the trace to follow the logic in precise detail).
