Chalmers spells out the key idea of implementation as follows:
The account of implementation is more fully specified in terms of the class of
As in the general case, the crucial requirement for implementing a CSA is that the formal structure of the computation mirrors the causal structure of the physical system. This is spelled out as follows:
How does the account of implementation provide a framework for computational cognitive theorizing? It does so, according to Chalmers (p.1), by supporting two theses that characterize the foundational role of computation: (1) a computational description is an abstract characterization of the causal organization of the system; and (2) mental properties are causal invariants.1 In other words, cognition – indeed, mentality in general – depends upon causal invariants, and a computational description
Let us focus first on Chalmers’ account of implementation. In the next section we will turn to the foundational claims it is alleged to support.
A preliminary point is perhaps obvious, though worth mentioning explicitly. For an application of the account to have any bite, the two levels of analysis – the computational and the physical – have to be
According to Chalmers’ account, CSAs provide a suitable formalism “for our purposes”, viz., answering the Putnam/Searle challenge and providing a foundation for computational cognitive explanation. One reason to specify the account in terms of CSAs is that the implementation conditions on CSAs are highly constrained:
Another reason for employing the CSA framework is its
Chalmers presumes that the appropriate formal characterization of the causal organization of the human mind will be a very complex CSA description with many input and internal-state parameters. As he points out, the requirement that states of the physical system satisfy reliable state-transition rules is what does the work in ruling out Putnam’s and Searle’s trivial implementations. The chance that an arbitrary physical system would satisfy these constraints is vanishingly small.
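The force of the reliable state-transition requirement can be illustrated with a toy sketch. The following is not Chalmers’ formalism in detail but a minimal model of a CSA, assuming invented substates and an invented transition rule: the total state is a vector of substates, each new substate depends on several old ones, and an implementation claim requires that every transition in a physical trace (under interpretation) conform to the rule.

```python
def csa_step(state, inp):
    """One transition of a toy three-element CSA (hypothetical rule).

    state: tuple of three binary substates; inp: binary input.
    Each new substate depends on several old substates, so the rule is
    genuinely combinatorial rather than a single monolithic state table.
    """
    a, b, c = state
    return (b ^ inp, a & c, a | b)

def implements(trace, inputs):
    """Check that a sequence of (interpreted) physical states satisfies the
    CSA's state-transition rule at every step -- the reliability condition
    that rules out Putnam/Searle-style trivial 'implementations'."""
    return all(csa_step(trace[i], inputs[i]) == trace[i + 1]
               for i in range(len(inputs)))
```

A trace generated by the rule passes the check; altering even one state in the trace makes it fail, which is why an arbitrary physical system is vanishingly unlikely to satisfy the conditionals for a complex CSA.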
I shall assume for the sake of argument that Chalmers’ account succeeds at the
It is worth emphasizing that whether or not the CSA formalism is the appropriate computational characterization for explaining cognition is an
A look at some representative examples of computational models of cognitive capacities supports the point. Marr’s (1982) theory of early vision explains edge detection by positing the computation of the Laplacian of the Gaussian of the retinal array. Ullman (1979) describes the visual system recovering the 3D structure of moving objects by computing a function from three distinct views of four non-coplanar points to the unique rigid configuration consistent with the points. Shadmehr and Wise’s (2005) computational account of motor control explains how a subject is able to grasp an object in view by computing the displacement of the hand from its current location to the target location, i.e. by computing vector subtraction. In a well-known example from animal cognition, Gallistel (1993) explains the Tunisian desert ant’s impressive navigational abilities by appeal to the computation of the displacement vector to its nest from any point along its foraging trajectory.
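Two of the computations just mentioned can be written down directly as the mathematical functions they are. The sketch below is illustrative only; the function names are mine, not the authors’:

```python
def reach_vector(hand, target):
    """Shadmehr & Wise-style motor planning: the displacement of the hand
    from its current location to the target, i.e. vector subtraction."""
    return tuple(t - h for h, t in zip(hand, target))

def home_vector(steps):
    """Path integration of the sort Gallistel attributes to the desert ant:
    the homeward displacement is the negation of the running sum of the
    ant's movement vectors."""
    total_x, total_y = 0.0, 0.0
    for dx, dy in steps:
        total_x += dx
        total_y += dy
    return (-total_x, -total_y)
```

Note that neither function has any interesting combinatorial structure; what matters to the explanation is simply which mathematical function is computed.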
In none of these cognitive models is the computational characterization a CSA description. The posited computation has little or no combinatorial structure. Rather, the explanatory strategy might be described as ‘function-theoretic,’ in the sense that the model explains the cognitive capacity in question by appeal not to some arcane, highly complex formal structure, but rather to an independently well-understood mathematical function under which the physical system is subsumed. In other words, what gets computed, according to these computational models, is the value of a mathematical function (e.g. addition, vector subtraction, the Laplacian of the Gaussian, a fast Fourier transform) for certain arguments for which the function is defined. For present purposes we can take functions to be mappings from sets (the arguments of the function) to sets (its values). The theory may go on to propose an
The upshot is that CSA formalism is not generally appropriate for characterizing computational cognitive science as it is actually practiced. But how does that practice stand with respect to the Putnam/Searle challenge? Are the computations posited in the above models implemented by arbitrary physical systems, and if so, does it follow that the explanations of cognition afforded by these models are trivial? I doubt that ‘deviant’ implementations can be conclusively ruled out, but this does not make computational explanation trivial. The argument will require some setting up.
I have argued that computational models do not typically posit computations with complex compositional structure; however, the physical system must still satisfy reliable state-transition rules that require that when the system is in the physical state(s) that (under the interpretation imposed by the computational description) realizes the arguments of the function, it goes into the physical state that (under interpretation) realizes the value of the function, for all the arguments for which the function is defined.4 These conditionals have modal force. For the sorts of functions discussed above, satisfying this condition will require significant
I shall have more to say below about constraints on implementing computational models, but for now let me emphasize that there is nothing approaching a
The epistemological context in which computational models are developed is instructive. We don’t start with a computational description and wonder whether a given physical system implements it. Rather, we start with a physical system such as ourselves. More precisely, we start with the observation that a given physical system has some cognitive competence – the system can add, or it can understand and produce speech, or it can see the three-dimensional layout of its immediate environment. Thus, the
With the target capacity in view, the theorist hypothesizes that the system computes some well-defined function (in the mathematical sense), and spells out how computing this function would explain the system’s observed success at the cognitive task. Justifying the computational description requires explaining how computing the value of the function contributes to the exercise of the cognitive capacity. For example, computing the Laplacian of the Gaussian of the retinal array produces a smoothed output that facilitates the detection of sharp discontinuities in intensity gradients across the retina, and hence the detection of significant boundaries in the scene. In other words, the computational description is justified by reference to the use to which the computation is put in the exercise of a manifest cognitive capacity.
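The Marr-style example can be made concrete with a toy one-dimensional sketch: smooth an intensity profile with a Gaussian, take the (one-dimensional analogue of the) Laplacian, and locate an edge at the zero-crossing of the result. The kernel width and profile below are arbitrary choices for illustration, not parameters from Marr’s theory:

```python
import math

def gaussian_kernel(sigma, radius):
    """A normalized 1D Gaussian kernel of the given radius."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolve, clamping indices at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def zero_crossings(values):
    """Indices where the sign of the signal flips."""
    return [i for i in range(1, len(values))
            if values[i - 1] * values[i] < 0]

# A step in intensity at position 10:
profile = [0.0] * 10 + [1.0] * 10
smoothed = convolve(profile, gaussian_kernel(1.5, 4))
# Discrete second derivative (the 1D 'Laplacian') of the smoothed profile:
second_deriv = [0.0] + [smoothed[i - 1] - 2 * smoothed[i] + smoothed[i + 1]
                        for i in range(1, len(smoothed) - 1)] + [0.0]
edges = zero_crossings(second_deriv)  # the edge shows up near position 10
```

The zero-crossing of the second derivative falls at the intensity step, which is the sense in which the smoothed output “facilitates the detection of sharp discontinuities.”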
Computational theorizing is constrained from above, as it were, by data about the performance of the system, and from below, by knowledge of available neural hardware. The computational hypothesis may predict a pattern of error that the system is prone to make in its normal environment. Observation of the system’s successes and failures at the cognitive task, or discoveries about available neural hardware, may lead the theorist to revise her initial computational hypothesis. Perhaps the device computes the function only for a more restricted domain than initially thought.6 Algorithms may be suggested for how the function is computed, but ultimately these need empirical motivation.
I appealed to the notion of use in considering how the computational description is justified. Appeal to use also helps constrain implementation. As Chalmers points out, a computational description is an abstract specification of the causal organization of the system. But as I noted above, isolating the causal organization responsible for
Take a simple adder. The physical states that (under interpretation) realize the addends and the physical states that (under interpretation) realize the sum stand in a causal-transition relation. In other words, the former states cause the system to go into the latter state (possibly with a significant number of intermediary physical states). In order for the system to be used to compute the addition function these causal relations have to hold
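The adder case can be sketched as follows. This is a minimal toy model, assuming an invented voltage-to-number interpretation and an idealized summing device (the constant and function names are mine):

```python
VOLTS_PER_UNIT = 0.5  # hypothetical interpretation scale

def interpret(voltage):
    """The interpretation imposed by the computational description:
    a voltage realizes the number voltage / VOLTS_PER_UNIT."""
    return round(voltage / VOLTS_PER_UNIT)

def physical_transition(v1, v2):
    """The device's causal behaviour: given the two 'addend' voltages it
    settles into an output voltage (here, idealized as their sum)."""
    return v1 + v2

def implements_addition(domain):
    """The conditional with modal force: for EVERY pair of arguments in the
    domain -- not just those that happen to be realized -- the physical
    states realizing the addends must cause the state realizing the sum."""
    return all(
        interpret(physical_transition(m * VOLTS_PER_UNIT, n * VOLTS_PER_UNIT))
        == m + n
        for m in domain for n in domain)
```

The point about use is that these causal transitions must hold reliably across the whole domain if the device is to be *used* to add, not merely mapped onto the addition function after the fact.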
The same point holds for natural computational mechanisms and the neural processes that employ them. Neural processes, it is reasonable to assume, are not sensitive to (merely) quantum-level changes, such that their behavior could be conditioned on such changes. Relatively gross (or macro-level) changes are required for the central nervous system to use a computational mechanism (in particular, to use its
In summary, appeal to use can both help isolate the relevant causal organization responsible for a cognitive capacity and distinguish substantive computational hypotheses from deviant cases.
1 (1) and (2) are reformulations of the claims that Chalmers dubs ‘(a)’ and ‘(b)’ on p.1 (abstract). Computational sufficiency – the idea that the right kind of computational structure suffices for the possession of a mind – and computational explanation – the idea that computation provides a general framework for the explanation of cognitive processes – are claimed (p.1) to be consequences of (a) and (b).
2 What is not contingent is that given this computational structure we have the cognitive capacities that we do have.
3 Chalmers claims (p.5) that combinatorial-state automata provide a basis for a general account of implementation, arguing that we can re-describe other computational formalisms in CSA terms. He explicitly mentions Turing machines, cellular automata, and non-deterministic and probabilistic automata. However, there is no reason to think that arbitrary machines and CSAs will have the same complexity profiles. A re-description of Ullman’s structure-from-motion device in CSA terms will impose structure that is not really there.
4 Not just, as in Putnam’s example, for the arguments that happen to be realized during the specified time period.
5 More generally, a theory characterizing the neural implementation of a computational model is likely to posit systems of neurons that compute the function. Recall the earlier point that the computational and physical levels must be independently specified. Individual neural structures will often receive no computational interpretation, thus there will be neural complexity without corresponding computational complexity. It follows that Chalmers’ gloss (the “short answer”) on his account of implementation – “A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation” (p.2) – is not generally correct. A complex causal structure may be needed to implement a function with little or no formal structure.
6 Just as the size of the display requires restricting the arithmetical functions that a hand calculator can be said to compute.
7 For perceptual mechanisms, of course, there are additional sources of constraint on the input side. The theorist trying to characterize the neural implementation of edge detection must look for structures that are differentially sensitive to changes in light intensity.
Chalmers’ account of implementation is said to support two theses that together serve as the foundation for computational cognitive science: (1) a computational description is an abstract characterization of the causal organization of the system; and (2) mental properties are causal invariants. Chalmers dubs the view that emerges from his elaboration and defense of these theses
Minimal computationalism, so understood, appears to involve little more than a commitment to the idea that mental processes are causal processes.8 Indeed, if Gibson’s theory of perception counts as computational
Chalmers takes it to be a virtue of his framework that minimal computationalism is unlikely to be falsified by empirical discoveries about the mind. But in casting his net so widely he has lost what is distinctive about computational explanation as it figures in cognitive science.10
I am certainly not denying that a computational description is an abstract characterization of the causal organization underlying cognition. But I want to stress that a computational description is a
8 This construal of Minimal Computationalism is given independent support by the discussion in section 3, where Chalmers introduces the notion of causal topology (the abstract causal organization of the system) and argues that mental properties depend only on causal topology, and that computational descriptions capture causal topology. Thanks to an anonymous referee for pointing this out.
9 It is unclear how to understand “appropriate computational form” in the above quote. Perhaps Chalmers means ‘expressible in CSA formalism’, but there is no reason to assume that Gibsonian mechanisms can be so characterized, at least not without introducing unmotivated structure. (Gibson himself insists that they should be treated as ‘black boxes’, from the perspective of psychology, though from a physiological perspective they may be quite complex.) See fn. 3 above.
10 It is hard to reconcile the very weak thesis Chalmers endorses as a foundation for the computational study of cognition in the last part of the paper (minimal computationalism) with the demanding requirements levied on implementation earlier (requiring complex combinatorial formal structure). The account of implementation is also claimed to play a key foundational role, but the relation between it and minimal computationalism is not transparent.
11 Thanks to an anonymous referee for helpful comments on an earlier version of this paper.