What is the brain? Researchers conceive of neurons as information processing units, meaning that the circuits formed by neurons support logical and mathematical operations. In this view, the brain is a computer. But it was not always so.
In the 17th century, the brain was mechanical, thought to operate in a manner similar to the powered and geared machines, like engines and clocks, that were at the forefront of innovation at the time. And long before the brain was mechanical, it was hydraulic. Philosophers proposed that fluids in the nervous system underlie health, emotions, personality, and movement. Variations on this analogy prevailed from the ancient Greek era through the time of Descartes.
“…is the brain really a computer?”
Tracing these metaphors reveals a historical pattern: Western science’s attempts to understand the brain have consistently drawn from whatever was the cutting edge of technology at the time. At every stage, the most advanced applications of science provided metaphors for the brain’s functioning. Evidently, the brain is so complex and inscrutable that the only way to make sense of it is by analogy to the most advanced technology of the day.
In previous eras, these metaphors were overturned as science progressed. So how good is our metaphor now – is the brain really a computer? Many still debate the nature of information in the brain and the details of how neural computations are carried out, but the prevailing metaphor is powerful.
One of the strongest pieces of evidence that the brain is a computer is the observation that neuroscience and engineering are converging. While neuroscientists theorize about how the brain can implement basic logical operations or routines, engineers are looking to the brain as a how-to guide.
Despite incredible progress, computers still can’t perform basic functions like vision, hearing, and motor coordination at the same level as the brain. More advanced functions, like object recognition and decision making, are even more challenging. As computers fall short of even these low-level benchmarks, engineers are paying more attention to properties of neural systems that were once considered irrelevant to the brain’s computational principles.
In contrast with digital computers, the brain works slowly and uses many different types of computing elements – arrays of identical transistors on a silicon chip don’t look anything like the tangled web of multifarious cell types in the brain. Moreover, the brain is rife with randomness and imprecision. But rather than being forced to surmount these unfortunate byproducts of undirected evolution, the brain might exploit these qualities to achieve its remarkable performance. This possibility challenges researchers to radically redesign computers according to principles observed in the brain.
To radically redesign a computer, the most obvious starting point is the bottom. Historically, computers have been digital, meaning that they encode information as sequences of 1s and 0s known as “bits,” and perform precise operations on these bits – rules for flipping and combining 0s and 1s – to carry out mathematical calculations, image processing, networking, and other functions.
But instead of operating within this digital paradigm, a computer could be reimagined as an analog device. Unlike digital systems that operate over binary signals, analog systems operate over smoothly varying quantities, generally represented in electronics by voltage, current, charge, or frequency. Whereas digital signals are all-or-nothing, analog signals exist along a continuum.
Why use analog signals? When information is represented by a smoothly varying signal, it is sometimes easier to manipulate than a digital signal. For example, when two lights shine on two photodiodes (electronic devices that convert light to current), they generate currents proportional to the lights’ intensities. If each current develops a voltage across matched components, and two equal resistors carry those voltages to a common point, the voltage at that point settles to the average of the two inputs. This is an example of a simple analog computation, but much more complicated analog circuits have been designed to perform other tasks.
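The averaging circuit above can be sketched numerically. This is an illustrative toy model, not a real circuit simulator: it applies Kirchhoff’s current law to a single node fed by several sources through resistors, which yields a conductance-weighted average of the inputs. With equal resistors, that reduces to the plain average the paragraph describes.

```python
def node_voltage(v_inputs, resistances):
    """Voltage at a common node fed by several sources through resistors.

    By Kirchhoff's current law the currents into the node sum to zero:
    sum((v_i - v_node) / r_i) = 0, so
    v_node = sum(v_i / r_i) / sum(1 / r_i),
    a conductance-weighted average of the input voltages.
    """
    conductances = [1.0 / r for r in resistances]
    weighted = sum(v * g for v, g in zip(v_inputs, conductances))
    return weighted / sum(conductances)

# Two light intensities converted to voltages (arbitrary units):
v1, v2 = 2.0, 6.0
out = node_voltage([v1, v2], [10e3, 10e3])  # two equal 10 kΩ resistors
print(out)  # with equal resistors, the node sits near the average of 2.0 and 6.0
```

The point of the analog version is that the circuit computes this average continuously and for free; the arithmetic is done by the physics of the wires and resistors rather than by a sequence of digital instructions.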
“Feedback and reciprocity between neuroscience and engineering knits the two fields closer and closer together.”
Beginning in the mid-1980s, engineers began to design analog systems that mimic the nervous system. One of the earliest pioneers of these so-called neuromorphic systems was Carver Mead, whose long career at Caltech included the development of vision, hearing, and touch sensors patterned after their biological counterparts. As a student and then a professor at Caltech, Mead studied the physical properties of transistors, the electronic switches that were quickly becoming the key elements of modern electronic devices because of their small size, long lifespan, and efficiency at handling billions of operations per second. Although transistors were designed to operate as digital switches, Mead sought to exploit their analog properties to mimic properties of vision and hearing in the brain.
As a graduate student working with Mead, Misha Mahowald helped build an artificial retina, using electronics to emulate the properties of cells in the back of the eye that convert light to neural impulses. Mahowald patterned her silicon retina after the connections in the three layers of cells at the back of the biological retina – photoreceptors, horizontal cells, and bipolar cells – where light begins its transformation from photons to vision. Her analog circuit performed what she identified as the most important function of those three cellular layers: reducing the extremely wide dynamic range of incoming light to a narrower range. This function is crucial for extracting important information from small local differences in brightness and contrast. To understand why this is important, consider that the average brightness of the global environment can vary by a factor of a million depending on the time of day and other environmental conditions!
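The compression described above can be illustrated with a toy model. This is not Mahowald’s actual circuit equations, only the principle: a logarithmic “photoreceptor” squeezes six decades of luminance into a narrow output range, and a “bipolar” stage reports local contrast relative to a smoothed neighborhood average, the role played by horizontal cells.

```python
import math

def photoreceptor(luminance):
    """Compress luminance (arbitrary units, spanning ~10^6) logarithmically."""
    return math.log10(luminance)

def bipolar(center, neighborhood):
    """Local contrast: log of the center minus the mean log of its neighbors."""
    mean_surround = sum(map(photoreceptor, neighborhood)) / len(neighborhood)
    return photoreceptor(center) - mean_surround

# The same 2:1 local brightness ratio yields the same contrast signal
# whether the scene is dim or a million times brighter:
dim = bipolar(2.0, [1.0, 1.0])
bright = bipolar(2e6, [1e6, 1e6])
print(dim, bright)  # both ≈ 0.301, the log of the 2:1 ratio
```

Because the output depends only on ratios of nearby intensities, the circuit extracts the small local differences that matter for vision while discarding the enormous swings in overall illumination.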
Most of us are already familiar with this problem in the context of the HDR (high dynamic range) setting on smartphone cameras. With this setting, your camera takes a few shots of a scene at different exposures and then edits these shots together, to capture more detail in the brightest and darkest spots. Mahowald’s artificial retina gives us a more rigorous understanding of why HDR images more closely resemble what we perceive with our eyes than does a single exposure.
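The merging step a camera performs in HDR mode can be sketched as a toy exposure fusion. Real HDR pipelines are far more sophisticated; this hypothetical example only shows the principle: each pixel is weighted by how well-exposed it is (far from pure black or pure white), so the merge keeps detail from whichever shot captured it best.

```python
def well_exposedness(p):
    """Weight in (0, 1]: highest for mid-gray pixels, lowest near 0.0 or 1.0."""
    return max(1e-6, 1.0 - abs(p - 0.5) * 2.0)

def fuse(exposures):
    """Per-pixel weighted average across a list of equally sized images."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(p * w for p, w in zip(pixels, weights))
        fused.append(total / sum(weights))
    return fused

dark = [0.05, 0.10, 0.45]    # underexposed shot: shadows crushed
bright = [0.40, 0.80, 0.98]  # overexposed shot: highlights clipped
print(fuse([dark, bright]))  # each merged pixel leans toward the better exposure
```

In the shadows the merged value leans toward the brighter exposure, and in the highlights toward the darker one, which is exactly the dynamic-range compression the retina performs continuously in hardware.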
Building on the success of her invention, Mahowald went on to found the Institute of Neuroinformatics at the University of Zurich. The Misha Mahowald Prize for Neuromorphic Engineering, now in its second year, honors Mahowald’s scientific legacy and her profound contributions to both neuroscience and engineering.
Feedback and reciprocity between neuroscience and engineering knit the two fields closer and closer together. As neuromorphic technology advances, neuroscientists increasingly turn to computer design to refine theories about how the brain processes information. These updated theories then feed back into computer engineering as design principles. As this cycle iterates, neuroscience and computer engineering become more inseparable, and our understanding of the brain becomes more accurate. With all due respect to Descartes, a similar affinity between neuroscientists and hydraulic engineers is extremely unlikely.
Artwork by Sean Noah.