Neuromorphic Engineering: Biomimicry from the Brain
What is the brain? Researchers conceive of neurons as information processing units, meaning that the circuits formed by neurons support logical and mathematical operations. In this view, the brain is a computer. But it was not always so.
In the 17th century, the brain was mechanical, thought to operate in a manner similar to the powered, geared machines – engines and clocks – that were at the forefront of innovation at the time. And long before the brain was mechanical, it was hydraulic: philosophers proposed that fluids in the nervous system underlie health, emotions, personality, and movement. Variations on this analogy prevailed from the ancient Greek era through the time of Descartes.
“…is the brain really a computer?”
Tracing these metaphors reveals a historical pattern: Western science’s attempts to understand the brain have consistently drawn on whatever technology was cutting-edge at the time. At every stage, the most advanced applications of science provided metaphors for the brain’s functioning. Evidently, the brain is so complex and inscrutable that the only way we can make sense of it is by analogy to the most advanced technology of the day.
In previous eras, these metaphors were overturned as science progressed. So how good is our metaphor now – is the brain really a computer? Many still debate the nature of information in the brain and the details of how neural computations are carried out, but the prevailing metaphor is powerful.
One of the strongest pieces of evidence that the brain is a computer is the observation that neuroscience and engineering are converging. While neuroscientists theorize about how the brain can implement basic logical operations or routines, engineers are looking to the brain as a how-to guide.
Despite incredible progress, computers still can’t perform basic functions like vision, hearing, and motor coordination at the same level as the brain. More advanced functions, like object recognition and decision making, are even more challenging. As computers fall short of the low-level benchmarks, engineers pay more attention to properties of neural systems that were once considered irrelevant to the brain’s computational principles.
In contrast with digital computers, the brain works slowly and uses many different types of computing elements – arrays of identical transistors on a silicon chip look nothing like the tangled web of multifarious cell types in the brain. Moreover, the brain is rife with randomness and imprecision. But rather than being forced to surmount these unfortunate byproducts of undirected evolution, the brain might exploit these very qualities to achieve its remarkable performance. This possibility challenges researchers to radically redesign computers according to principles observed in the brain.
To radically redesign a computer, the most obvious starting point is the bottom. Historically, computers have been digital, meaning that they encode information as sequences of 1s and 0s known as “bits” and manipulate those bits with precise rules – swapping 0s for 1s and back – to carry out mathematical calculations, image processing, networking, and other functions.
But instead of operating within this digital paradigm, a computer could be reimagined as an analog device. Unlike digital systems that operate over binary signals, analog systems operate over smoothly varying quantities, generally represented in electronics by voltage, current, charge, or frequency. Whereas digital signals are all-or-nothing, analog signals exist along a continuum.
Why use analog signals? When information is represented by a smoothly varying signal, it is sometimes easier to manipulate than a digital signal. For example, when two lights shine on two photodiodes (electronic devices that convert light to current), they generate currents that are proportional to the lights’ intensities. If these currents pass into two equal resistors that both connect to a common point on the other side, then by Kirchhoff’s current law the current flowing out of that point is the sum of the two inputs; halve that sum and you have their average. The physics of the circuit does the arithmetic, with no bits involved. This is a simple analog computation, but much more complicated analog circuits have been designed to perform other tasks.
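For readers who want to see the idea spelled out, here is a minimal sketch in Python of that averaging circuit. The responsivity constant and light levels are made-up illustrative values, not taken from any real device:

```python
# Toy model of analog averaging: photocurrents meeting at a common node
# simply add (Kirchhoff's current law), so the circuit's physics does
# the arithmetic. All constants here are illustrative assumptions.

RESPONSIVITY = 0.5e-6  # assumed photodiode responsivity, amps per lux

def photocurrent(intensity_lux):
    """Photodiode output current, proportional to light intensity."""
    return RESPONSIVITY * intensity_lux

def common_node(currents):
    """Currents entering a shared node sum together (KCL); dividing
    the sum by the number of inputs yields their average."""
    total = sum(currents)
    return total, total / len(currents)

i1 = photocurrent(200.0)   # dimmer light
i2 = photocurrent(800.0)   # brighter light
total, average = common_node([i1, i2])
print(f"sum = {total:.2e} A, average = {average:.2e} A")
# -> sum = 5.00e-04 A, average = 2.50e-04 A
```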
“Feedback and reciprocity between neuroscience and engineering knits the two fields closer and closer together.”
In the mid-1980s, engineers began to design analog systems that mimic the nervous system. One of the earliest pioneers of these so-called neuromorphic systems was Carver Mead, whose long career at Caltech included the development of vision, hearing, and touch sensors patterned after their biological counterparts. As a student and then a professor at Caltech, Mead studied the physical properties of special electronic switches called transistors, which were quickly becoming the key elements of modern electronic devices because of their small size, long lifespans, and efficiency in handling billions of operations per second. Although transistors were designed to operate as digital switches, Mead sought to exploit their analog properties to mimic aspects of vision and hearing in the brain.
As a graduate student working with Mead, Misha Mahowald helped build an artificial retina, using electronics to emulate the properties of cells in the back of the eye that convert light to neural impulses. Mahowald patterned her silicon retina after the connections in the three layers of cells at the back of the biological retina – photoreceptors, horizontal cells, and bipolar cells – where light begins its transformation from photons to vision. Her analog circuit performed what she identified as the most important function of those three cellular layers: reducing the extremely wide dynamic range of incoming light to a narrower range. This function is crucial for extracting important information from small local differences in brightness and contrast. To understand why this is important, consider that the average brightness of the global environment can vary by a factor of a million depending on the time of day and other environmental conditions!
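Here is a toy sketch, in the same spirit as the one above, of the three-layer principle (a deliberate simplification for illustration, not a model of Mahowald’s actual circuit): photoreceptors respond roughly logarithmically, horizontal cells average their neighbors, and bipolar cells report the difference, i.e., local contrast.

```python
import math

def photoreceptor(luminance):
    """Assumed logarithmic response: a millionfold range of light
    maps onto a narrow range of output."""
    return math.log10(luminance)

def horizontal_average(responses):
    """Horizontal cells spatially average photoreceptor responses."""
    return sum(responses) / len(responses)

def bipolar(center, local_avg):
    """Bipolar cells signal the difference between a photoreceptor
    and the local average: contrast, not absolute brightness."""
    return center - local_avg

# The same dark-dark-bright edge under dim and bright ambient light:
for scale in (1.0, 1e6):  # a millionfold change in overall brightness
    patch = [2.0 * scale, 2.0 * scale, 8.0 * scale]
    responses = [photoreceptor(lum) for lum in patch]
    avg = horizontal_average(responses)
    print([round(bipolar(r, avg), 3) for r in responses])
# Both lines print the same contrast signal: [-0.201, -0.201, 0.401].
```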
Most of us are already familiar with this problem in the context of the HDR (high dynamic range) setting on smartphone cameras. With this setting, your camera takes a few shots of a scene at different exposures and then merges them to capture more detail in the brightest and darkest spots. Mahowald’s artificial retina gives us a more rigorous understanding of why HDR images more closely resemble what we perceive with our eyes than a single exposure does.
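A crude version of that merging step can be sketched in a few lines. The weighting function and pixel values below are invented for illustration; real HDR pipelines are far more sophisticated:

```python
def well_exposedness(p):
    """Weight a pixel (in [0, 1]) by how far it sits from clipping:
    highest at mid-gray, zero at pure black or white."""
    return max(0.0, 1.0 - abs(p - 0.5) * 2.0)

def fuse(shots):
    """Blend several exposures of the same scene pixel by pixel,
    favoring whichever shot exposed each pixel best."""
    fused = []
    for pixels in zip(*shots):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights) or 1.0  # avoid dividing by zero
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

under = [0.02, 0.10, 0.45]  # short exposure: keeps the highlights
over  = [0.30, 0.80, 1.00]  # long exposure: keeps the shadows
print(fuse([under, over]))  # detail drawn from both shots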

Building on the success of her invention, Mahowald went on to help found the Institute of Neuroinformatics at the University of Zurich. The Misha Mahowald Prize for Neuromorphic Engineering, now in its second year, honors Mahowald’s scientific legacy and her profound contributions to both neuroscience and engineering.
Feedback and reciprocity between neuroscience and engineering knits the two fields closer and closer together. As neuromorphic technology advances, neuroscientists increasingly turn to computer design to refine theories about how the brain processes information. These updated theories then feed back into computer engineering as design principles. As this cycle iterates, neuroscience and computer engineering become more inseparable, and our understanding of the brain becomes more accurate. With all due respect to Descartes, a similar affinity between neuroscientists and hydraulic engineers is extremely unlikely.
Artwork by Sean Noah.
References
1. Friston KJ. Canonical microcircuits for predictive coding. Neuron. 2012;76(4):695-711. doi:10.1016/j.neuron.2012.10.038.
2. Indiveri G, Horiuchi TK. Frontiers in neuromorphic engineering. Front Neurosci. 2011;5:118. doi:10.3389/fnins.2011.00118.
3. Mahowald MA, Mead C. The silicon retina. Sci Am. 1991;264(5):76-82. doi:10.1038/scientificamerican0591-76.
The notion that brains “process information” is a bunch of nonsense, but it is merely the newest instantiation of indirect realism – an inherently ridiculous notion that implies an inner man…an homunculus. One of the reasons that it is propagated so strongly is the mistaken notion that it has something to do with Shannon-type information, but it does not. The sort of “information” that is being touted is ordinary ol’ information like that found in, say, a grocery list. This means that the notion that “information,” in the ordinary sense, like “computation,” is physical (is there anything else?) is nonsense (perhaps I mentioned that already). There is nothing in particular about, say, a device that makes it a “computer.” The thing I am looking at is part of a computer, but an abacus is also a computer. Finally, an animal’s brain is changed by exposure to the kinds of environments that “produce learning,” but it doesn’t follow that anything like “information” (or “representations”) is stored, retrieved and analyzed in order for behavior to occur. Hope this helps…
Glen M. Sizemore, Ph.D., retired ne’er-do-well professional scientist and International Man of Mystery.
Hi Glen,
Thanks for your opinion. If you’re interested in taking in more viewpoints about how the brain processes information, I’d recommend these two talks from opposite sides of the debate:
https://www.youtube.com/watch?v=Y3ROyorFv5E
https://www.youtube.com/watch?v=mUpJfFr1Q0I
That said, I tend to agree with you that the current paradigm of brain-as-computer isn’t perfect. And I am especially skeptical about theories of Shannon information in the brain. In this piece, I just try to point out that computer engineers are finding fruitful inspiration by looking at how neural systems work.
I do disagree with your assertion that “information” is a meaningless concept, though. I’d love to hear more about why you think a homunculus is implied by theories of information processing in the brain.
Thanks,
Sean
I’ll put my response to Sean here as I can’t get the more specific “reply” function to work:
Hi Sean,
I may check out the attachments, but I’m more interested in asking if the notion of information-processing – as applied to behavior and its physiological mediation – is conceptually coherent.
The issue of getting inspiration from “neural systems” is an entirely separate issue from the point I was making, and the AI-NN people take inspiration not only from neurons but from fundamental behavioral processes, studied by the natural science of behavior, as well – they even adopted the term “reinforcement”! This has not, ironically, made many sciences (or “sciences” as the case may be) whose subject matters involve behavior pay much attention to the natural science of behavior. How is it not obvious that notions of mainstream psychology (i.e., “cognitive psychology”) like “representations” and “information-processing” are unnecessary at best? Ah well…enough of that…
I never said that “information” was meaningless, just that it is used in the ordinary sense; your brain “storing information” is, philosophically and functionally, no different than sticking a grocery list in your pocket. My point was that “information theory” (cf, Shannon) cannot be used to add scientific credence to information-processing approaches to psychology and neuroscience. And the fact that the term “information” is used in its ordinary sense is important concerning its conceptual implications. Shannon information may be strictly a matter of something’s physical state, but ordinary information is not – Japanese characters are not “information” because of their “physics” – to me, for example, they simply are not “information.” Something is “information” if it functions as information. But this functioning requires an organism whose behavior bears a particular relation to the alleged information. Hence, “information in the brain” presupposes some animal or animal-like thing for which the “information” has, in ordinary language, “meaning” – an animalunculus as it were.
Let us recall what ol’ Theophrastus said a few thousand years ago about what are now called “representations,” which are closely related to “information-processing” (this will be sort of a paraphrase): “It seems strange to me that he would claim that we hear a bell by way of an inner bell for we should then have to explain how it is that we hear the inner bell – the old issue would still confront us.”
Cordially,
Glen
Hi Glen,
Thanks for commenting and starting this lively discussion. I do not follow your argument that information in the brain is not Shannon-information. Shannon’s definition of information—reduction in uncertainty—applies here, as it does to Japanese characters as well. This is about a probabilistic reduction in uncertainty, not “physical state.” You write that, to you, “they simply are not ‘information.’” You then change the definition of information to a circular definition that requires a homunculus. According to your definition of information, you would be right. But not if we’re speaking about Shannon information.
I’ll quote David Krakauer (president of the Santa Fe Institute) from his conversation on this topic with Sam Harris (https://www.samharris.org/blog/item/complexity-stupidity):
“It’s like going from the billiard balls all over the table to the billiard balls in a particular configuration. Very formally speaking, you have reduced the uncertainty about the world. You’ve increased information, and it turns out you can measure that mathematically. The extent to which that’s useful is proved by neuro-prosthetics. The information theory of the brain allows us to build cochlear implants. It allows us to control robotic limbs with our brains. So it’s not a metaphor. It’s a deep mathematical principle. It’s a principle that allows us to understand how brains operate and reengineer it.”
So this question of information processing is not just a theoretical question. It’s also an empirical question and it’s already been answered.
-Joel
FYI, Joel: I responded to your response to my comment, but I did so in a general comment (i.e., NOT by clicking “reply under your post as I have done here). I am telling you this only because you did not reply to my rebuttal of what you said and I wanted to make sure that you weren’t not-responding simply because you were not aware that I had answered your criticism. I suspect that you know I responded but, for whatever reason, choose not to respond. Thought I would give you a chance though.
Cordially,
Glen
Hi Glen,
Thanks for your reply, I think we have made some progress in understanding each other. I also think in some ways we’ve been talking past each other. You’re right that Shannon information does not address semantics. Depending on the application, that may or may not be an actual shortcoming of Shannon information. But semantics is not always necessary. For example, I can play (and win) a game of hangman without knowing the meaning of the word being guessed. I don’t know any Japanese and can’t understand any Japanese characters. But, if I simply know which characters are more common and which characters are less common, I can play strategically and even win.
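In fact, that strategy fits in a few lines of code. Here is a rough sketch (the frequency ordering is approximate, and nothing in it depends on meaning):

```python
# Hangman guesser that knows only letter frequencies, not word meanings.
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"  # rough English frequency order

def next_guess(guessed):
    """Return the most common letter not yet tried. The guesser never
    needs to know what the hidden word means."""
    for letter in FREQ_ORDER:
        if letter not in guessed:
            return letter

print(next_guess({"e", "t", "a"}))  # -> 'o'
```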
Regarding your point that there is some sort of dichotomy between the “physical” and “ordinary” definitions of information, Shannon’s information theory is really not a branch of physics, but probability theory. Information theory is abstract enough to deal with subjects ranging from particles to Japanese characters (the latter would never fall under the umbrella of physics).
Sean’s article discusses very low level computations performed in the retina. Such low level computations can be handled by circuits (neural or silicon) that flexibly solve many problems using similar abstract principles. Like the hangman game, semantics are not part of the picture at this level. Since you write that you “have no doubt that Shannon-type information will probably play a role in describing the physiological mediation of behavior,” I think we may at least agree on the basics.
I also don’t like argumentum ad populum, but since you bring it up, have a look at this Aeon essay (https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer) that is arguing from your point of view. There are over 600 comments which are, as a whole, very intelligent and very critical (you can find the responses in the left margin).
Best,
Joel
JF: Hi Glen,
Thanks for commenting and starting this lively discussion. I do not follow your argument that information in the brain is not Shannon-information. Shannon’s definition of information—reduction in uncertainty—applies here, as it does to Japanese characters as well.
GS: [Once again, I’ll reply using the general reply function – I can’t seem to get replying to a sub-post to work…sorry.] Hi Joel. The sort of information that is alleged to be “in the brain” is, no matter what is said explicitly, the ordinary sort that we, in ordinary parlance, “use to make decisions” – like a list of items of what to get at the store, or “you shouldn’t give dogs Tylenol,” or “hook the jumper cables up negative to negative, positive to positive.” This is the sort of stuff that is said to “be in the brain.” But this sort of “meaningfulness” is not a part of Shannon’s definition, at least as far as I understand it, and the notion that it IS part of it is vigorously disputed. And, even though I am no fan of the argumentum ad populum, I would urge you to explore the internet on this topic; I am certainly no expert on information theory in the technical, Shannon sense, but the consensus on the ‘net seems to be that I am correct. But, again, I have already included the caveat concerning the fallibility of information (in the ordinary sense!) about what ideas are popular. I Googled “ordinary-language meaning of information vs. Shannon’s meaning” and the internet, in its infinite wisdom, changed the “vs.” to “and” and returned a number of links. Anyway…I suggest you do more research on this topic, Joel. But – even though I claim no expertise – I would say this: Shannon’s information is about a quantitative treatment of some aspects of the physics that occurs from, say, the time AFTER somebody speaks into a telephone until immediately after sound comes out of the receiver. This involves information in Shannon’s sense, but not necessarily in the ordinary sense involving meaning (whatever that is). Say someone transmits random letters using a telephone. The physics of this transduction would involve information in the Shannonian sense, but no information in the ordinary sense. But, again, it is the ordinary sense in which people talk about “information” in cognitive “science.” Otherwise, it can’t do the theoretical work it is supposed to. BTW, I have no doubt that Shannon-type information will probably play a role in describing the physiological mediation of behavior but it won’t be t
GS: But the amount of information IS a function of the physical state – the amount of information just depends, also, on the number of possible states.
JF: You write that, to you, “they simply are not ‘information.’” You then change the definition of information to a circular definition that requires a homunculus.
GS: I meant “to me as a non-reader of Japanese (characters are not “information” in the ordinary sense).” My point being that there is some way to describe a series of Japanese characters in Shannon-information terms (I’m guessing) but, to me as a non-reader of Japanese characters, it is not information in the ordinary sense, and it is the ordinary sense in which “information” is usually mentioned in cognitive psychology and the fields it has corrupted (like much of neuroscience – or neuro”science” as the case may be). In any event, I gave no definition of information in the ordinary-language sense and my guess is that “information” would defy unitary definition – hardly unusual for ordinary-language terms.
JF: According to your definition of information, you would be right. But not if we’re speaking about Shannon information.
GS: Yup. But when it is said that “the brain stores ‘information’” and so forth, it is the ordinary-language “definition” that is used. Geez, what else is “knowledge” supposed to be? Knowledge is, one way or the other, the darling of cognitive “science” – err…check out the etymology. But *we* are likely, nowadays, to say that “stuff” like “knowledge of how my coffee-maker works” (and a cause of my behavior of operating it) is stored in the brain in some sort of code that is treatable in terms of Shannon’s information. I hope I have driven home the point that, overwhelmingly so, you will see that the “technical” use of “information” by cognitive psychologists (and the fields they have corrupted) is the ordinary usage (with a veneer of Shannon to add scientific gloss) – and that ordinary usage always implies someone as a consumer of information in order for whatever-it-is to BE information. But, as I said, a piece of paper with a message written in Japanese characters on it is not information to me – it can’t stand in relation to my behavior the way it can for someone who reads Japanese characters and for whom it is information. An information-theoretic treatment, OTOH, of an expanding gas does not really entail any consumer of the information that makes it information – it’s physics, right?
JF: I’ll quote David Krakauer (president of the Santa Fe Institute) from his conversation on this topic with Sam Harris (https://www.samharris.org/blog/item/complexity-stupidity):
“It’s like going from the billiard balls all over the table to the billiard balls in a particular configuration. Very formally speaking, you have reduced the uncertainty about the world. You’ve increased information, and it turns out you can measure that mathematically. The extent to which that’s useful is proved by neuro-prosthetics. The information theory of the brain allows us to build cochlear implants. It allows us to control robotic limbs with our brains. So it’s not a metaphor. It’s a deep mathematical principle. It’s a principle that allows us to understand how brains operate and reengineer it.”
GS: If I do certain neuroscience experiments – say, looking at “fear conditioning” in rats where I can arrange the environment of the rats and manipulate and measure aspects of brain activity – it will be said that I have “found the representation of X” (or the “engram” yada yada yada) or “reproduced the representation of X such that the rat does Y” and so forth. But I assert that that is misleading and conceptually incoherent. We can agree on what is manipulated and measured (that’s what is supposed to be in the Methods section of scientific papers) but calling it a “representation” (or “information” etc.) is an assumption and always will remain so. That is precisely why conceptual analysis of key assumptions in science is so important. I have to chuckle when scientists argue that “philosophy” is irrelevant to science – this misunderstands the profound role played by purely conceptual issues in the scientific endeavor. Whenever you hear “must” as in “something must have been stored and retrieved in order for the animal to [do X]” you should immediately be on guard that assumptions are being smuggled in as facts. This is a serious, serious problem in cognitive psychology and the fields it has corrupted.
JF: So this question of information processing is not just a theoretical question. It’s also an empirical question and it’s already been answered.
GS: Needless to say, I disagree. What one sees is the hopeless conflation of conceptual and empirical issues. Anyway, this is long, and the thread is already in danger of going cold so I’ll send this along without proofreading it – my apologies if there are typos, clumsy explanations, ambiguity etc.
Oops…
I wrote: “BTW, I have no doubt that Shannon-type information will probably play a role in describing the physiological mediation of behavior but it won’t be t”
I should have said: “BTW, I have no doubt that Shannon-type information will probably play a role in describing the physiological mediation of behavior but it won’t be the role that cognitivists in psychology and neuroscience had envisioned.”
I made an additional error – I accidentally omitted something Joel said: “This is about a probabilistic reduction in uncertainty, not ‘physical state.’”
That statement preceded my comment: “But the amount of information IS a function of the physical state – the amount of information just depends, also, on the number of possible states.”
Apologies for my carelessness.
G.
JF: Hi Glen,
Thanks for your reply, I think we have made some progress in understanding each other. I also think in some ways we’ve been talking past each other. You’re right that Shannon information does not address semantics. Depending on the application, that may or may not be an actual shortcoming of Shannon information. But semantics is not always necessary. For example, I can play (and win) a game of hangman without knowing the meaning of the word being guessed. I don’t know any Japanese and can’t understand any Japanese characters. But, if I simply know which characters are more common and which characters are less common, I can play strategically and even win.
GS: Yes, but that is all beside any point I made. Basically, I am simply saying that when behavior is “explained” by “information” – whether it be the “information” that comprises the alleged representation interposed between the world seen and the “experience of seeing” or “information” that is retrieved and (LOL) “acted upon” – the alleged information is of the ordinary sort.
JF: Regarding your point that there is some sort of dichotomy between the “physical” and “ordinary” definitions of information, Shannon’s information theory is really not a branch of physics, but probability theory. Information theory is abstract enough to deal with subjects ranging from particles to Japanese characters (the latter would never fall under the umbrella of physics).
GS: I fail to see how any of this is relevant to the points I made. Plus, you are reading a little too much into my use of the word “physics.” If I pull a grocery list out of my pocket, the marks on the paper have some physical description. That is what I mean by “physics.” But, for example, a description of the marks on the page does not explain my behavior in getting the groceries. And perhaps “description” is not a good term. If I take a photo of the note, I will create something (the photo) that has certain geometric correspondences to the note – that is “physics.” And I can describe a note in terms of information theory given that the potential marks (e.g., “a.” “b” etc.) are finite in number since the number of their potential arrangements will also be finite. But none of that will explain a person’s response to the note because such an explanation is bigger than a description of the marks on the page. No amount of “transformation of the input” (i.e., “processing”) will explain the fact that the person walks out of the store with certain items.
JF: Sean’s article discusses very low level computations performed in the retina.
GS: Does my flashlight “compute” anything? What about the Moon? Does it “compute its orbit”? These are not rhetorical questions. The retina’s functioning is simply the result of a complex dynamic system. What makes this “computation”? This is like the old issue of “Does Nature follow laws?” The answer is, “of course not.” Scientific laws are comprised of specialized verbal behavior that controls *our* behavior. Nature does not set up and solve differential equations…know what I mean? And ultimately, not even we “follow laws.” That is simply an ordinary-language facon de parler. Or rather, “to follow a law” IS to behave in particular ways, under particular conditions comprised of the variables, and their interactions, of which our behavior is a function. And it is not even necessary that a person be able to state the alleged rules or laws in order to apply the notion that humans follow rules or laws – obviously since a generalization of the notion leads to the claim that “Nature follows laws.” But I’m rambling a bit…but not that much. Anyway, the notion that our brains “process information” (and that’s where behavior comes from) is simply an assumption. There are no data, nor could there be as far as I can see, that “support” the notion that we “process information.” Nor could there be any that falsify it. At the risk of belaboring what might already be clear, what would you say if I asked you for evidence that the retina “computes”? Think carefully…you might describe some different environmental events (e.g., some object that reflects or emits light – you know, a “visual stimulus”) and orderly dynamic changes within the neural system comprising the retina etc. But that is not evidence that it computes – it is simply what you are calling “computation.”
JF: Such low level computations can be handled by circuits (neural or silicon) that flexibly solve many problems using similar abstract principles.
GS: Well…I can believe that! My flashlight can even compute! And it is much better than me at solving the problem of emitting light! And if this is not computation, why not?
JF: Like the hangman game, semantics are not part of the picture at this level. Since you write that you “have no doubt that Shannon-type information will probably play a role in describing the physiological mediation of behavior,” I think we may at least agree on the basics.
GS: That seems unlikely. As to “hangman,” you seem to be pointing out that there are things for which Shannon’s theory is useful. I don’t doubt that. But the kind of information that is supposed to somehow guide behavior (“See! That rat stored the location of the food as information in its brain!”) is information of the ordinary sort. But “ordinary information” is only information by way of its functioning vis-a-vis an organism. So if the information is supposedly used by a brain part – and that is certainly the implication – then the brain part must do the same things as a whole organism. But, then, there you have it! Both a homunculus and an infinite regress. That should, you know, send up warning flags.
JF: I also don’t like argumentum ad populum, but since you bring it up, have a look at this Aeon essay (https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer) that is arguing from your point of view. There are over 600 comments which are, as a whole, very intelligent and very critical (you can find the responses in the left margin).
Best,
Joel
GS: Yes…I have read Epstein’s paper. Needless to say, it is not unusual that we share a view – we are both academic descendants of Skinner – Epstein much closer to the source than me, though!
G.
Hi Glen,
You write “But none of that will explain a person’s response to the note because such an explanation is bigger than a description of the marks on the page.” What are your thoughts on the process by which an embryo uses the information in DNA to multiply and divide into a whole organism? Do you also object to this description of DNA as information?
The reason why you reject the idea that flashlights or moons can compute is that you see this as an active process, whereas I see this as a passive process. What is logically incoherent about the description of the flashlight as a simple computer? Or the description of the Earth-Moon system as a computer? This only becomes a paradox if you believe that computation is an active process, rather than a passive process. If one follows this logic, even a MacBook or an iPhone is not a computer, as each would require some magic homunculus to do the arithmetic inside its chips.
Best,
Joel
Joel: Hi Glen,
GS: Hi Joel.
Joel: You write “But none of that will explain a person’s response to the note because such an explanation is bigger than a description of the marks on the page.” What are your thoughts on the process by which an embryo uses the information in DNA to multiply and divide into a whole organism? Do you also object to this description of DNA as information?
GS: Sort of. The “conceptual baggage” is, perhaps, less damaging than when it comes to psychology, where the rise of cognitivism has assured that mainstream psychology will never be a natural science. Something like the closet behaviorism of the “embodiment movement” may derail representationalism and allow mainstream psychology to move towards being a natural science, but then it would be a new mainstream psychology, contextualistic and selectionist instead of mechanistic/reductionistic and ahistorical. Anyway, in case you have missed it, modern genetics as it is peddled is decried as preformationism by those who count themselves among the devotees of developmental systems theory (DST). I agree. So…in case it isn’t clear, I do not think that “…an embryo uses the information in DNA to multiply and divide into a whole organism.” The development of an organism is best seen as the result of a system describable by differential (or difference) equations – note that I am not saying that the outcome is the result of the integration of the equations – the equations are verbal behavior that controls us…not the subject matter. One of Susan Oyama’s books is called “The Ontogeny of Information” – think about the meaning of that. For mainstream genetics, the “information” has a phylogeny, but no ontogeny. The mature organism is “encoded” in the DNA and will emerge – essentially despite the ontogenic environment – as if by God’s plan. I am not the first to offer that metaphor. And, lest you think I am offering what passes for interactionism in the nature/nurture controversy, I am not. Mainstream “interactionists” still think it is possible to parse variability as to that due to ontogeny and that due to phylogeny.
Joel: The reason why you reject the idea that flashlights or moons can compute is that you see this as an active process, whereas I see this as a passive process.
GS: I’ve heard that before (this ain’t my first rodeo). But, in the case of the latter, what separates your conception of “computation” from good ol’ fashioned efficient causation? What extra philosophical work is done by “computation” if it is just efficient causation?
Joel: What is logically incoherent about the description of the flashlight as a simple computer?
GS: It isn’t used to do computations or process information, which is what makes computation computation. An abacus is just a bunch of beads on rods suspended in a box – except that it is used to do computation and is, thus, a computer.
Joel: Or the description of the Earth-Moon system as a computer? This only becomes a paradox if you believe that computation is an active process, rather than a passive process.
GS: My alleged beliefs are irrelevant. What matters, as ol’ Wittgenstein might say, is the “grammar of the word.”
Joel: If one follows this logic, even a MacBook or an iPhone is not a computer, as each would require some magic homunculus to do the arithmetic inside its chips.
GS: Yes, but the logic is yours, not mine. Computations and information (in the ordinary sense) require a “consumer” as it were. In the case of “information in the brain” or the “brain doing computations,” the implied consumer must be the implied homunculus that “processes information,” makes decisions, and then drives the body around like a Cadillac.
Cordially,
Glen M. Sizemore, Ph.D., retired ne’er-do-well professional scientist and International Man of Mystery.