Chalmers’s first thesis, which is true, is that (just about) any causal system―whether it’s a galaxy, a gall-bladder or a grain of sand―can be modeled computationally, thereby (if successful) fully “capturing” (and hence explaining) its (relevant) causal mechanism―in other words, explaining, causally, how it works.
This explanatory power is certainly something that psychology, cognitive science, neuroscience and artificial intelligence would want, for the kinds of things they study and hope to explain causally: organisms, their brains, and their behavior, as well as artificial devices we build that are capable of similar kinds of behavior. But it’s an explanatory power that physics, chemistry, biology and engineering already have for the kinds of things that they study, without the need for “A Computational Foundation for the Study of Planetary Motion” (or of Helium, or of Hemoglobin, or of Convection Heaters). It’s simply the ubiquitous observation that―alongside language and mathematics―computers and computational algorithms are useful tools in explaining the things there are in the world and how they work. (This is what Searle 1980 called “Weak AI.”)
Chalmers’s second thesis is that―unlike, say, flying, or digesting, which can likewise be modeled and explained computationally (as per the first thesis) without thereby being computation―cognition itself actually is a form of computation.
I will try to flesh out both of Chalmers’s theses without getting bogged down in technical details that are interesting but not pertinent to this fundamental distinction.
Computation is symbol manipulation: symbols are objects of arbitrary shape (e.g., 0 and 1), and they are manipulated on the basis of rules (“algorithms”) that operate on the symbols’ shapes, not their meanings.
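As an illustration of such shape-based rule-following (a toy example of my own, not from the text), here is binary increment implemented purely as rewrite rules over the shapes “0” and “1”, with the symbols never interpreted as numbers:

```python
# Toy illustration: computation as rule-governed manipulation of symbol shapes.
# The rules fire on the characters '0' and '1' as shapes alone; nothing in the
# procedure "knows" that the tape denotes a binary numeral.

def increment(tape: str) -> str:
    """Scan right to left: rewrite '1' -> '0' and carry; rewrite '0' -> '1' and halt."""
    symbols = list(tape)
    i = len(symbols) - 1
    while i >= 0:
        if symbols[i] == '1':       # rule matches the symbol's shape, not its meaning
            symbols[i] = '0'        # flip it; the carry continues leftward
            i -= 1
        else:
            symbols[i] = '1'        # absorb the carry and halt
            return ''.join(symbols)
    return '1' + ''.join(symbols)   # carry propagated past the leftmost symbol

print(increment('0111'))  # -> 1000
```

It is only under our external interpretation that ‘0111’ means seven and ‘1000’ means eight; the algorithm itself trades exclusively in shapes.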
How do computations do useful things? There are many ways. Numerical algorithms compute quantitative results we are interested in knowing. Desk calculators implement numerical algorithms. Boolean (and/or/not) search in Google’s database can retrieve documents or data we are interested in. Virtual Reality can fool, entertain, or train our senses and movements. NASA’s flight simulations anticipate problems that might arise in actual space flights. If Copernicus and Galileo had had digital computers, they might (just might!) have reached their conclusions faster, or more convincingly. Appel & Haken proved the four-color theorem with the help of computation in 1976. And if Mozart had had a computer to convert keyboard improvisation into metered notation, ready to print or edit and revise online, humankind might have been left a much larger legacy of immortal masterpieces from his tragically short 35 years of life.
A word about causal structure―a tricky notion that goes to the heart of the matter. Consider gravitation. As currently understood, gravitation is a fundamental causal force of attraction between bodies, proportional to the product of their masses. If ever there was a prototypical instance of causal structure―of causing―gravitational attraction is it.
Now gravitational attraction can be modeled exactly, by differential equations, or computationally, with discrete approximations. Our solar system’s planetary bodies and sun, including the causal structure of their gravitational interactions, can be “mirrored” formally in a computer simulation to as close an approximation as we like. But no one would imagine that that computer simulation actually embodied planetary motion: It would be evident that there was nothing actually moving in a planetary simulation, nor anything in it that was actually exerting gravitational attraction. (If the planetary simulation was being used to generate a Virtual Reality, just take your goggles off.)
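A minimal sketch of such a discrete approximation (semi-implicit Euler integration of an inverse-square attraction; the units and values are made up for illustration, not real planetary data):

```python
import math

# Discrete-time approximation of gravitational attraction: a single "planet"
# on a circular orbit around a central mass. The simulation formally mirrors
# the inverse-square causal structure, but nothing in it is moving or attracting.

G, M = 1.0, 1.0   # gravitational constant and central mass (arbitrary units)
dt = 1e-3         # time step of the discrete approximation

x, y = 1.0, 0.0                  # start at radius 1
vx, vy = 0.0, math.sqrt(G * M)   # circular-orbit speed: v = sqrt(G*M/r)

for _ in range(10_000):
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r**3, -G * M * y / r**3   # inverse-square acceleration
    vx, vy = vx + ax * dt, vy + ay * dt             # update velocity first,
    x, y = x + vx * dt, y + vy * dt                 # then position (semi-implicit Euler)

print(math.hypot(x, y))   # the simulated orbital radius stays close to 1.0
```

The state variables mirror positions and velocities as closely as we like, yet nothing here exerts attraction: the only physical dynamics are those of the hardware executing it.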
The computer implementation of the algorithm would indeed have causal structure―a piece of computer hardware, computing, is, after all, a physical, dynamical system too, hence, like the solar system itself, governed by differential equations. But the causal structure of the implementation, as a dynamical system in its own right, would be the causal structure of the computer hardware executing the algorithm, not the causal structure of the solar system being modeled.
So in what sense does the causal structure of the computational model “mirror” the causal structure of the thing that it is modeling? The reply is that it mirrors it formally, not physically.
5. Formally Mirroring Versus Physically Instantiating Causality
But this is all obvious. Everyone knows that the mathematical (or verbal) description of a thing is not the same kind of thing as the thing itself, despite the formal invariance they share. Why would one even be tempted to think otherwise? We inescapably see the difference whenever both the model and the thing modeled are observable.
So it is evident by inspection in the case of physics, chemistry, and biology (e.g., when we note that synthetic hearts pump blood, but computational hearts do not), and even in mathematics, that the “causal structure” of the model (whether computational or analytic, symbolic or numeric, discrete or continuous, approximate or exact) may be the right one for a full causal explanation of how the thing works, without the model itself being, or doing, what the thing modeled is or does.
6. Cognition Is Visible Only To the Cognizer
How could Chalmers (and so many others) have fallen into the error of confusing these two senses of causality, one formal, the other physical? The reason is clear and simple (indeed Cartesian), even though it has been systematically overlooked. This very error is always made in the special case of cognition. What on earth is cognition? Unlike, say, movement, cognition is unobservable―observable, in fact, to no one but the cognizer.
Let’s contrast the case of the brain and its invisible cognizing with the case of planets, moving, as well as with the case of a bodily organ other than the brain, one for which the problem of invisible properties does not arise: the heart, pumping. The reason we would never dream of saying that planetary motion was just computational―or of saying that planets in a computational model were actually moving because the model “mirrors” their “causal structure”―is simply the fact that planetary motion is observable: we can see at a glance that nothing in the computational model is actually moving.
8. Bodily Activity, Brain Activity, and Cognizing
Now imagine the same thing for the brain. It’s a bit more complicated, because, unlike the heart, the brain is actually “doing” not one, nor two, but three kinds of things: generating our behavioral capacity, carrying on its internal (neural) activity, and cognizing.
So, unlike the planets and the heart, which are doing just one kind of thing, all of it fully observable to us (moving and pumping, respectively), the brain is doing three kinds of things, only two of which are observable: the third, cognizing, is observable to no one but the cognizer.
Now we are in a position to pinpoint exactly where the error keeps creeping in: No one would call a cognitive model a success if it could not be demonstrated to generate our behavioral capacity―if it could not do what we can do. So the question becomes: what kind of model can generate our behavioral capacity? That’s where the Turing Test (TT) comes in (Harnad 2008): A model can generate our behavioral capacity if it can pass TT―the full robotic version of TT, not just the verbal version (i.e., not just the ability to correspond verbally, indistinguishably from a real person, for a lifetime).
Let’s set aside the second kind of thing that brains do―internal brain activity―because it is controversial how many (and which) of the specific properties of brain activity are necessary either to generate our behavioral capacity or to generate cognition. It could conceivably turn out to be true that the only way to successfully generate cognition is one that preserves some of the dynamic (noncomputational) features of brain activity (electrochemical activity, secretions, chemistry, etc.). In that case the fact that those observable (neural) features were missing from the computational model of cognition would be just as visible as the fact that motion was missing from the computational model of planetary motion (or that a computational plane was not flying, or a computational stomach not digesting). Let’s call the hypothesis that, say, biochemical properties are essential―either to generate our behavioral capacity or to generate cognition―“neuralism.”
It’s important to understand that my critique of the thesis that cognition is computation does not depend on neuralism turning out to be true.
The basis of my critique of cognitive computationalism is, however, analogous to neuralism, and could perhaps be dubbed “dynamism.” It is not that the brain’s specific dynamic properties may be essential for cognizing, but that cognizing itself, like moving, is a dynamical property, not an implementation-independent computational one.
Consider behavioral capacity first: Let us agree at once that whatever model we build that succeeds in generating our actual behavioral capacity―i.e., the power to pass the full robotic version of TT, for a lifetime―would definitely have the requisite causal power: the power to do what we can do.
Sensing, like moving (and flying, and digesting), is not implementation-independent symbol-manipulation. Consider the question of whether there could be a successful TT-passing robot that consisted of nothing other than (i) sensors and movable peripheral parts plus (ii) a functional “core” within which all the work (other than the I/O [sensory input and motor output] itself) was being done by an independent computational module that mirrored the causal structure of the brain (or of any other system capable of passing the TT). This really boils down to the question of whether the causal link-up between our sensory and motor systems, on the one hand, and the rest of our nervous system, on the other, can in fact be split into two autonomous modules―a peripheral sensorimotor one that is necessarily noncomputational, plus a central one that is purely computational.
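The hypothetical partition can be caricatured schematically (illustrative names of my own; a sketch of the architecture in question, not a claim about how it would really work):

```python
# Caricature of the partition at issue: noncomputational sensorimotor
# peripherals wrapped around a purely computational core that maps input
# symbols to output symbols, independently of how it is implemented.

def computational_core(percept: str) -> str:
    """The allegedly implementation-independent module: symbols in, symbols out."""
    return "APPROACH" if percept == "LIGHT" else "WANDER"

class Robot:
    def sense(self, light_level: float) -> str:
        # transduction: a physical magnitude becomes a symbol (noncomputational)
        return "LIGHT" if light_level > 0.5 else "DARK"

    def act(self, command: str) -> str:
        # transduction: a symbol becomes physical movement (noncomputational)
        return f"motors executing {command}"

    def step(self, light_level: float) -> str:
        return self.act(computational_core(self.sense(light_level)))

print(Robot().step(0.9))   # -> motors executing APPROACH
```

The question in the text is whether cognition really factors this cleanly: whether everything between the transducers could be one autonomous, purely symbol-manipulating module.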
I doubt that such a partition makes sense. To me it seems just as unlikely as the possibility that we could divide heart function (or flying, or digestion) into a noncomputational I/O module feeding into and out of a computational core. I think the dynamics are essential all the way through, not just at the sensorimotor periphery.
But let us agree that the possibility of this functional partition is an empirical question, one that cannot be settled in advance by argument alone.
13. It Feels Like Something To Cognize
This is the point to remind ourselves that we’ve left out the third burden of cognitive theory, in addition to (1) behavioral capacity and (2) brain function (which we’ve agreed to ignore): Even if we succeed in generating full TT capacity, that does not settle whether the successful TT-passer actually cognizes―whether it feels like something to be that system.
It is not that Chalmers is unaware of this distinction. He writes:
14. The “Dancing Qualia” Argument
But Chalmers thinks his “dancing qualia” argument shows that feeling must, like computation itself, be an implementation-independent property, present in every implementation of the algorithm that successfully captures the right causal structure, no matter how radically the implementations differ:
What Chalmers is saying here is this: Suppose there could be two physically different but “organizationally invariant” implementations of the same causal structure, one feeling one way and the other feeling another way (or not feeling at all), both variants implemented within the same hardware so that we could throw a switch to flip from one implementation variant to the other. Then the fact that the causal structure was the same for both variants would prevent the hypothetical difference in feeling from being felt. So the causal invariance would guarantee the feeling invariance.
But the trouble with this argument is that it simply assumes what it was supposed to prove: that sameness of causal structure guarantees sameness of feeling.
15. Implementation-Dependent Properties
The reason the flip/flop thought experiment could not guarantee that all implementations of the causal structure of the solar system or the heart would move and beat, respectively, is that moving and beating are dynamical, implementation-dependent properties, not implementation-independent computational ones.
In the case of feeling, the reason this very same distinct possibility (that feeling is not a computational property) is not equally evident (as it ought to be) is that feeling, unlike moving, is observable to no one but the feeler.
For if feeling is (invisibly) like moving, then flipping from one causally invariant implementation to another could be like flipping between two causally invariant implementations of planetary motions, one that moves (because it really consists of planets, moving) and one that does not (because it’s just a computer algorithm, encoding and implementing the same “organizational invariance”): In the case of feeling, however, unlike in the case of moving, the only one who would “see” this difference would be the cognizer, as the “movements” (behavior) would (by the premise of organizational invariance) be identical in both cases.
Chalmers’s “Dancing Qualia” argument simply does not take this distinct possibility into account at all. It won’t do to say that there might be a felt difference, but it would have to be a “thin” one (because it could have no behavioral consequences). Chalmers gives no reason at all why the state difference could not be as “thick” as thick can be, as mighty as the difference between mental day and nonmental night, if the flip/flop were between a feeling and a non-feeling state, rather than just between two slightly different feeling states with a “thin” difference between them. But because Chalmers has imposed his premise of “functional” (i.e., empirical, observational) indistinguishability (with feeling unable to contradict that premise, because feeling is not externally observable), this would entail that no such difference―thick or thin―could ever be detected or reported in any way.
I am not saying that I believe there could be a behaviorally indistinguishable Zombie state (Harnad 1995)―just that it cannot be ruled out simply on the grounds of having assumed a premise! (Or, better, one must not assume premises that could allow empirically indistinguishable Zombies.) Nor is it necessarily true that “noticing” has to be “functional”―if functional means computational: it could just as well be dynamical.
“Demoting” feeling to a dynamical property, its presence or absence dependent on conformity with the right differential equations rather than the right computer program, releases feeling from having to be an implementation-independent computational property.
But does cognition have to be felt at all? That question cannot be settled directly, because feeling is observable only to the feeler.
So there’s something to be said for concentrating on solving the “easy” problem of explaining how to pass the TT first (Harnad & Scherzer 2008). I think it is unlikely (for reasons that go beyond the scope of this discussion) that the solution to even that “easy” problem will be a purely computational one (Harnad 1994). It is more likely that the causal mechanism that succeeds in passing TT will be hybrid dynamic/computational―and that the dynamics will not be those of the hardware implementation of the computation (those really are irrelevant to what is being modeled).
Once that (“easy”) problem is solved, however, only the TT-passing system itself will know whether it really does cognize―i.e., whether it really feels like something to be that TT-passer. (Being able to determine that from the outside would amount to solving the “other-minds” problem.)
But even if we had a guarantee from the gods that the TT-passer really cognizes, that still would not explain the causal role of the feeling itself: how or why the TT-passer feels rather than merely does.
So not only is it unlikely that implementing the right computations will generate cognition, but whatever it is that does generate cognition― whether its causal topography is computational, dynamical, or a hybrid combination of both―will not explain the causal role of consciousness itself (i.e., feeling), in cognition. And that problem may not just be “hard,” but insoluble.1
1 If it were true, neuralism—the hypothesis that certain dynamical properties of the brain are essential in order to generate the brain’s full behavioral capacity—would already refute computationalism, because it would make the premise that computation alone could generate the brain’s full causal power false. The truth of neuralism would also refute Chalmers’s “Dancing Qualia” argument, a fortiori. Even as a possibility, neuralism invalidates the Dancing Qualia argument. Dynamism provides a more general refutation than neuralism; it does not depend on the brain’s specific dynamics having to be the only possible way to generate either the brain’s behavioral capacity or feeling. It is based on the fact that feeling, like moving, is a dynamical property, but, unlike moving, observable only to the feeler. Hence both computationalism and the Dancing Qualia argument could fail, with no one able to detect their failure except the feeler—while the computationalist premise pre-emptively defines the feeler a priori as unable to signal the failure of the premise in any way. This is not just a symptom of the circularity of Chalmers’s premise about the causal power of computation, however. It is also a symptom of the “hard problem” of explaining the causal role of feeling: Unless (contrary to all empirical evidence to date) dualism were true—with feeling being an autonomous psychokinetic causal force, on a par with gravitation and electromagnetism—feeling seems to be causally superfluous in any explanation of cognitive capacity and function, regardless of whether the causality is computational, dynamic or hybrid.